SYSTEMS AND METHODS FOR HIERARCHICAL NETWORK MANAGEMENT
A method for managing a communications network includes providing a parent network manager in a parent domain of the communications network, and providing a child network manager in a child domain of the communications network. The parent network manager comprises at least one of a parent Service Oriented Network Auto Creation (SONAC) function and a parent MANagement and Orchestration (MANO) function. The child network manager comprises at least one of a child Service Oriented Network Auto Creation (SONAC) function and a child MANagement and Orchestration (MANO) function. The parent and child network managers cooperate to optimize management of the parent and child domains of the communications network.
This application is based on, and claims benefit of, U.S. Provisional Application No. 62/415,778 filed Nov. 1, 2016, the entire content of which is hereby incorporated herein by reference.
FIELD OF THE INVENTION
The present invention pertains to the field of communication networks, and in particular to systems and methods for Hierarchical Network Management.
BACKGROUND
Network functions virtualization (NFV) is a network architecture concept that uses the technologies of IT virtualization to virtualize entire classes of network node functions into building blocks that may connect, or chain together, to create communication services. NFV relies upon, but differs from, traditional server-virtualization techniques, such as those used in enterprise IT. A virtualized network function (VNF) may consist of one or more virtual machines running different software and processes, on top of standard high-volume servers, switches and storage devices, or even cloud computing infrastructure, instead of having custom hardware appliances for each network function. For example, a virtual session border controller could be deployed to protect a network domain without the typical cost and complexity of obtaining and installing physical network protection units. Other examples of NFV include virtualized load balancers, firewalls, intrusion detection devices and WAN accelerators.
The NFV framework consists of three main components:
Virtualized network functions (VNFs) are software implementations of network functions that can be deployed on a network functions virtualization infrastructure (NFVI).
Network functions virtualization infrastructure (NFVI) is the totality of all hardware and software components that build the environment where VNFs are deployed. The NFV infrastructure can span several locations. The network providing connectivity between these locations is considered as part of the NFV infrastructure.
Network functions virtualization MANagement and Orchestration (MANO) architectural framework (NFV-MANO Architectural Framework) is the collection of all functional blocks, data repositories used by these blocks, and reference points and interfaces through which these functional blocks exchange information for the purpose of managing and orchestrating NFVI and VNFs.
The building block for both the NFVI and the NFV-MANO is the NFV platform. In the NFVI role, it consists of both virtual and physical processing and storage resources, and virtualization software. In its NFV-MANO role it consists of VNF and NFVI managers and virtualization software operating on a hardware controller. The NFV platform implements carrier-grade features used to manage and monitor the platform components, recover from failures and provide effective security—all required for the public carrier network.
Software-Defined Topology (SDT) is a logical network topology that may be used to implement a given network service instance. For example, for a cloud based database service, an SDT may comprise logical links between a client and one or more instances of a database service. As the name implies, an SDT will typically be generated by one or more software applications executing on a server. Logical topology determination is done by the SDT, which prepares the Network Service Infrastructure (NSI) descriptor (NSLD) as its output. It may use an existing template of an NSI and add parameter values to it to create the NSLD, or it may create a new template and define the composition of the NSI.
Software Defined Protocol (SDP) is a logical End-to End (E2E) protocol that may be used by a given network service instance. For example, for a cloud based database service, an SDP may define a network slice to be used for communications between the client and each instance of the database service. As the name implies, an SDP will typically be generated by one or more software applications executing on a server.
Software-Defined Resource Allocation (SDRA) refers to the allocation of network resources for logical connections in the logical topology associated with a given service instance. For example, for a cloud based database service, an SDRA may use service requirements (such as Quality of Service, latency, etc) to define an allocation of physical network resources to the database service. As the name implies, an SDRA will typically be generated by one or more software applications executing on a server.
Service Oriented Network Auto Creation (SONAC) utilizes software-defined topology (SDT), software defined protocol (SDP), and software-defined resource allocation (SDRA) to create a network or virtual network for a given network service instance. In some cases, SONAC may be used to create a 3rd Generation Partnership Project (3GPP) slice using a virtualized infrastructure (SDT, SDP, and SDRA) to provide a Virtual Network (VN) service to an external customer. SONAC may be used to optimize the Network Management, and so may also be considered to be a Network Management (NM) optimizer.
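The composition of SDT, SDP and SDRA outputs described above can be illustrated with a minimal sketch. All function names and data shapes below are assumptions for illustration only, not an API defined by this disclosure: the sketch simply shows a SONAC-style optimizer chaining the three stages to produce a virtual-network description for one service instance.

```python
# Illustrative sketch only: hypothetical SDT, SDP and SDRA stages
# composed by a SONAC-style function. Not the disclosed implementation.

def sdt_define_topology(request):
    # SDT stage: logical topology, here one link from the client
    # to each instance of the requested service.
    return [("client", inst) for inst in request["instances"]]

def sdp_select_protocol(topology):
    # SDP stage: one end-to-end protocol choice per logical link.
    return {link: "reliable" for link in topology}

def sdra_allocate(topology, request):
    # SDRA stage: split the requested bandwidth evenly across links.
    per_link = request["bandwidth_mbps"] / len(topology)
    return {link: per_link for link in topology}

def sonac_create_vn(request):
    # SONAC composes the three stages into one VN description.
    topology = sdt_define_topology(request)
    return {
        "topology": topology,
        "protocol": sdp_select_protocol(topology),
        "allocation": sdra_allocate(topology, request),
    }

# Example: a cloud-based database service with two instances.
request = {"instances": ["db-1", "db-2"], "bandwidth_mbps": 100}
vn = sonac_create_vn(request)
```

In this sketch each stage consumes the previous stage's output, mirroring the SDT-then-SDP-then-SDRA ordering implied by the definitions above.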
Architecture options for the management plane that enable it to carry out the tasks of SONAC are therefore highly desirable.
This background information is provided to reveal information believed by the applicant to be of possible relevance to the present invention. No admission is necessarily intended, nor should be construed, that any of the preceding information constitutes prior art against the present invention.
SUMMARY
An object of embodiments of the present invention is to provide architecture options needed for the management plane in carrying out the tasks of Network Management optimization.
Accordingly, an aspect of the present invention provides a method for managing a communications network. The method includes providing a parent network manager in a parent domain of the communications network, and providing a child network manager in a child domain of the communications network. The parent network manager comprises at least one of a parent Service Oriented Network Auto Creation (SONAC) function and a parent MANagement and Orchestration (MANO) function. The child network manager comprises at least one of a child Service Oriented Network Auto Creation (SONAC) function and a child MANagement and Orchestration (MANO) function. The parent and child network managers cooperate to optimize management of the parent and child domains of the communications network.
Further features and advantages of the present invention will become apparent from the following detailed description, taken in combination with the appended drawings, in which:
It will be noted that throughout the appended drawings, like features are identified by like reference numerals.
DETAILED DESCRIPTION
The 3rd Generation Partnership Project (3GPP) system needs to use a common virtualized infrastructure for its VNF instantiation and associated resources. The virtualized infrastructure may be distributed at different geographical locations and under different Data Centers (DCs) controlled by their own local MANOs. For the purposes of the present disclosure, the term Data Center (DC) shall be understood to refer to any network domain capable of operating under the control of a local MANO and/or SONAC, whether or not such domain actually is doing so.
What is needed is a mechanism to use these resources for the 3GPP slices and services. However, there can be common VNFs, Network Elements (NEs) or other resources used by multiple services or slices and their usage may be dynamically controlled for different 3GPP slices and/or 3GPP services.
Various measurements and reports need to be generated on resource usage by these VNFs and NEs specific to 3GPP services or slices. ETSI NFV MANO uses Network Services to segregate different 3GPP slices or services.
The present disclosure provides several mechanisms to use VNF instantiation and associated resources across different domain-level Network Management (NM) systems in a hierarchical manner. Each of these mechanisms is described below.
The bus 112 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus, or a video bus. The processor 106 may comprise any type of electronic data processor. The memory 108 may comprise any type of non-transitory system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), or a combination thereof. In specific embodiments, the memory 108 may include more than one type of memory, such as ROM for use at boot-up, and DRAM for program and data storage for use while executing programs.
The mass storage 114 may comprise any type of non-transitory storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus 112. The mass storage 114 may comprise, for example, one or more of a solid state drive, hard disk drive, a magnetic disk drive, or an optical disk drive.
The video adapter 116 and the I/O interface 118 provide optional interfaces to couple external input and output devices to the ED 102. Examples of input and output devices include a display 124 coupled to the video adapter 116 and an I/O device 126 such as a touch screen coupled to the I/O interface 118. Other devices may be coupled to the ED 102, and additional or fewer interfaces may be utilized. For example, a serial interface such as Universal Serial Bus (USB) (not shown) may be used to provide an interface for an external device.
The electronic device 102 also includes one or more network interfaces 110, which may comprise wired links and/or wireless links to access one or more networks 120 or other devices. The network interfaces 110 allow the electronic device 102 to communicate with remote units via the networks 120. For example, the network interfaces 110 may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas (collectively referenced at 122 in
In some embodiments, electronic device 102 may be a standalone device, while in other embodiments electronic device 102 may be resident within a data center. A data center, as will be understood in the art, is a collection of computing resources (typically in the form of servers) that can be used as a collective computing and storage resource. Within a data center, a plurality of servers can be connected together to provide a computing resource pool upon which virtualized entities can be instantiated. Data centers can be interconnected with each other to form networks consisting of pools of computing and storage resources connected to each other by connectivity resources. The connectivity resources may take the form of physical connections such as Ethernet or optical communications links, and may include wireless communication channels as well. If two different data centers are connected by a plurality of different communication channels, the links can be combined together using any of a number of techniques including the formation of link aggregation groups (LAGs). It should be understood that any or all of the computing, storage and connectivity resources (along with other resources within the network) can be divided between different sub-networks, in some cases in the form of a resource slice. If the resources across a number of connected data centers or other collection of nodes are sliced, different network slices can be created.
In some embodiments, the electronic device 102 may be an element of communications network infrastructure, such as a base station (for example a NodeB, an enhanced Node B (eNodeB), or a next generation NodeB (sometimes referred to as a gNodeB or gNB)), a home subscriber server (HSS), a gateway (GW) such as a packet gateway (PGW) or a serving gateway (SGW), or various other nodes or functions within an evolved packet core (EPC) network. In other embodiments, the electronic device 102 may be a device that connects to network infrastructure over a radio interface, such as a mobile phone, smart phone or other such device that may be classified as a User Equipment (UE). In some embodiments, ED 102 may be a Machine Type Communications (MTC) device (also referred to as a machine-to-machine (m2m) device), or another such device that may be categorized as a UE despite not providing a direct service to a user. In some references, an ED 102 may also be referred to as a mobile device (MD), a term intended to reflect devices that connect to a mobile network, regardless of whether the device itself is designed for, or capable of, mobility.
The processor 106, for example, may be provided as any suitable combination of: one or more general purpose micro-processors and one or more specialized processing cores such as Graphics Processing Units (GPUs) or other so-called accelerated processors (or processing accelerators).
The application platform 204 provides the capabilities for hosting applications and includes a virtualization manager 210 and application platform services 212. The virtualization manager 210 supports a flexible and efficient multi-tenancy run-time and hosting environment for applications 214 by providing Infrastructure as a Service (IaaS) facilities. In operation, the virtualization manager 210 may provide a security and resource “sandbox” for each application being hosted by the platform 204. Each “sandbox” may be implemented as a Virtual Machine (VM) image 216 that may include an appropriate operating system and controlled access to (virtualized) hardware resources 206 of the server 200. The application-platform services 212 provide a set of middleware application services and infrastructure services to the applications 214 hosted on the application platform 204, as will be described in greater detail below.
Applications 214 from vendors, service providers, and third-parties may be deployed and executed within a respective Virtual Machine 216. For example, MANagement and Orchestration (MANO) functions and Service Oriented Network Auto-Creation (SONAC) functions (or any of Software Defined Networking (SDN), Software Defined Topology (SDT), Software Defined Protocol (SDP), and Software Defined Resource Allocation (SDRA) controllers) may be implemented by means of one or more applications 214 hosted on the application platform 204 as described above. Communication between applications 214 and services in the server 200 may conveniently be designed according to the principles of Service-Oriented Architecture (SOA) known in the art.
Communication services 218 may allow applications 214 hosted on a single server 200 to communicate with the application-platform services 212 (through pre-defined Application Programming Interfaces (APIs) for example) and with each other (for example through a service-specific API).
A Service registry 220 may provide visibility of the services available on the server 200. In addition, the service registry 220 may present service availability (e.g. status of the service) together with the related interfaces and versions. This may be used by applications 214 to discover and locate the end-points for the services they require, and to publish their own service end-point for other applications to use.
Mobile-edge Computing allows cloud application services to be hosted alongside mobile network elements, and also facilitates leveraging of the available real-time network and radio information. Network Information Services (NIS) 222 may provide applications 214 with low-level network information. For example, the information provided by NIS 222 may be used by an application 214 to calculate and present high-level and meaningful data such as: cell-ID, location of the subscriber, cell load and throughput guidance.
A Traffic Off-Load Function (TOF) service 224 may prioritize traffic, and route selected, policy-based, user-data streams to and from applications 214. The TOF service 224 may be supplied to applications 214 in various ways, including: a Pass-through mode where (uplink and/or downlink) traffic is passed to an application 214 which can monitor, modify or shape it and then send it back to the original Packet Data Network (PDN) connection (e.g. 3GPP bearer); and an End-point mode where the traffic is terminated by the application 214 which acts as a server.
The virtualization layer 208 and the application platform 204 may be collectively referred to as a Hypervisor.
It will also be understood that server 200 may itself be a virtualized entity. Because a virtualized entity has the same properties as a physical entity from the perspective of another node, both virtualized and physical computing platforms may serve as the underlying resource upon which virtualized functions are instantiated.
MANO, (SONAC), SDN, SDT, SDP and SDRA functions may in some embodiments be incorporated into a SONAC controller.
As may be appreciated, the server architecture of
Other virtualization technologies are known or may be developed in the future that may use a different functional architecture of the server 200. For example, Operating-System-Level virtualization is a virtualization technology in which the kernel of an operating system allows the existence of multiple isolated user-space instances, instead of just one. Such instances, which are sometimes called containers, virtualization engines (VEs) or jails (such as a “FreeBSD jail” or “chroot jail”), may emulate physical computers from the point of view of applications running in them. However, unlike virtual machines, each user space instance may directly access the hardware resources 206 of the host system, using the host system's kernel. In this arrangement, at least the virtualization layer 208 of
The SLM 320 may include a Cross-service Optimizer 332, a Slice Configuration Manager (SL-CM) 334, a Slice Fault Manager (SL-FM) 336, a Service Instance-specific Configuration Manager (SI-CM) 338 and a Service Instance-specific Performance Manager (SI-PM) 340. The Cross-service Optimizer 332 may operate to optimize, for each slice, the allocation of slice resources to one or more services. The SL-CM 334, SL-FM 336, SI-CM 338 and SI-PM 340 may operate to provide slice-specific configuration and fault management functions, and Service-Instance-specific configuration and performance management functions as will be described in greater detail below.
At each layer of the management hierarchy there are four network management options, depending on the interworking mechanism of SONAC and MANO. These options are as follows:
Option 1: SONAC interacts with an enhanced MANO. In this option, the MANO NFVO interface is enhanced to accept SONAC commands as service requests or service request updates (i.e., a zero-intelligence MANO).
Option 2: SONAC-in-MANO. In this case, the MANO NFVO functionality is enhanced to allow forwarding-graph modification within the MANO entity.
Option 3: SONAC works alone, without assistance from MANO. This option is applicable only to the telecom network.
Option 4: MANO works alone, without assistance from SONAC. This option is applicable only to Data Center (DC) networks.
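The four interworking options above can be summarized as a small dispatch table. The enumeration names and the `applicable_domains` helper below are illustrative assumptions, not terminology from this disclosure; the sketch only encodes which option applies to which kind of network, as stated above.

```python
# Hedged sketch: the four SONAC/MANO interworking options as an enum,
# with a helper encoding the applicability stated in the text above.
from enum import Enum

class Interworking(Enum):
    SONAC_WITH_ENHANCED_MANO = 1  # Option 1: MANO accepts SONAC commands
    SONAC_IN_MANO = 2             # Option 2: forwarding-graph changes in MANO
    SONAC_ALONE = 3               # Option 3: telecom network only
    MANO_ALONE = 4                # Option 4: DC networks only

def applicable_domains(option):
    # Options 3 and 4 are restricted; options 1 and 2 apply to both.
    if option is Interworking.SONAC_ALONE:
        return {"telecom"}
    if option is Interworking.MANO_ALONE:
        return {"data_center"}
    return {"telecom", "data_center"}
```

A management-plane layer could use such a table to select, per domain, which interworking option its network manager should apply.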
In some embodiments, the SONAC and MANO may be co-resident in a common network manager (e.g. either one or both of the GNM 308 or a DNM 306). In other embodiments the SONAC may be resident in the GNM 308, while the MANO is resident in a DNM 306, or vice versa. In the illustrated example, the SONAC 402 is represented by a Software Defined Topology (SDT) controller 406, a Software Defined Protocol (SDP) controller 408 and a Software Defined Resource Allocation (SDRA) controller 410, while the MANO 404 is represented by a Network Function Virtualization Orchestrator (NFVO) 412, a Virtual Network Function Manager (VNFM) 414 and a Virtualized Infrastructure Manager (VIM) 416. The SDT controller 406, SDP controller 408 and SDRA controller 410 of the SONAC 402 interact with each other to implement optimization of the network or network domain controlled by the SONAC 402. Similarly, the NFVO 412, VNFM 414 and VIM 416 of the MANO 404 interact with each other to implement network function management within the network or network domain controlled by the MANO 404. In some embodiments, each of the NFVO 412, VNFM 414 and VIM 416 of the MANO 404 may be configured to interact directly with the SDT controller 406, SDP controller 408 and SDRA controller 410 of the SONAC 402.
In some embodiments, the SONAC 502 and MANO 504 may be co-resident in a common network manager (e.g. either one or both of the GNM 308 or a DNM 306). In other embodiments the SONAC may be resident in the GNM 308, while the MANO is resident in a DNM 306, or vice versa. The SONAC 502 and MANO 504 are similar to the SONAC 402 and MANO 404 of
Information sent from a DM 306 to the NM 318 (e.g. the CM 326) for Domain abstraction may include:
- Number of Virtual machines (VMs); number of CPUs (and per CPU processing speed), memory, disk storage, maximum disk IOPS (in bits or bytes per second);
- incoming line cards, outgoing line cards, per line card IOPS (in bits or bytes per second);
- average internal packet switching delay (in number of packets per second, from one incoming line card to one outgoing line card) or per in/out line card pair packet switching delay.
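The domain-abstraction information listed above can be pictured as a single structured report sent from the DM to the NM. The field names in the sketch below are assumptions chosen to match the list above; the disclosure does not define a concrete message format.

```python
# Illustrative sketch: a hypothetical shape for the DM-to-NM
# domain-abstraction report described in the text above.
from dataclasses import dataclass

@dataclass
class DomainAbstractionReport:
    num_vms: int                   # number of Virtual Machines (VMs)
    num_cpus: int
    cpu_speed_ghz: float           # per-CPU processing speed
    memory_gb: int
    disk_storage_gb: int
    max_disk_iops: int             # in bits or bytes per second
    incoming_line_cards: int
    outgoing_line_cards: int
    per_line_card_iops: int        # in bits or bytes per second
    avg_switching_delay_us: float  # average internal packet-switching delay

# Example report for a small domain.
report = DomainAbstractionReport(
    num_vms=8, num_cpus=32, cpu_speed_ghz=2.4,
    memory_gb=64, disk_storage_gb=500, max_disk_iops=10**9,
    incoming_line_cards=2, outgoing_line_cards=2,
    per_line_card_iops=10**9, avg_switching_delay_us=3.5,
)
```

Under abstraction, the NM sees only this aggregate view of the domain, in contrast to the topology-level detail exposed in the next list.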
Information sent from a DM 306 to the NM 318 (e.g. the CM 326) for Domain exposure may include:
- Domain network topology
- Node capability: which may comprise the same information described above for domain abstraction, and, in the case of a radio node, the number of Radio Bearers (RBs) and the maximum transmit power;
- Link capability: which may include bandwidth; and, in the case of a wireless link, the (average) spectral efficiency.
Information exchanged between a DM 306 and the NM 318 (e.g. the CM 326) for NFV negotiation may include:
- From NM to DM: A proposal including Network Functions (NFs) to be hosted, NF-specific properties (such as impact on traffic rate), NF-specific compute resource requirements, NF interconnection and associated QoS requirements, ingress NF (and possibly desired ingress line card), egress NF (and possibly desired egress line card), per line card maximum rate support needed for incoming or outgoing traffic.
- From DM to NM: A Notification of proposal acceptance; or a counter proposal; or Cost update (or initialization) including per-NF hosting cost, NF-specific compute resource allocation, ingress line card, ingress traffic rate and cost, egress line card, egress traffic rate and cost.
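The NFV negotiation exchange above (proposal, then acceptance or counter-proposal) can be sketched with a toy DM-side evaluator. The message fields and the greedy fitting rule are assumptions for illustration; the disclosure specifies only the kinds of information exchanged, not a negotiation algorithm.

```python
# Hedged sketch: a DM evaluating an NM proposal to host Network Functions.
# Field names and the acceptance rule are illustrative assumptions.

def dm_evaluate(proposal, capacity_cpus):
    """Return an accept (with cost update) or a counter-proposal."""
    needed = sum(nf["cpus"] for nf in proposal["nfs"])
    if needed <= capacity_cpus:
        # Acceptance, with an illustrative per-NF hosting cost.
        return {"type": "accept", "cost": needed * 2.0}
    # Counter-proposal: greedily keep only the NFs that fit.
    fitting, used = [], 0
    for nf in proposal["nfs"]:
        if used + nf["cpus"] <= capacity_cpus:
            fitting.append(nf)
            used += nf["cpus"]
    return {"type": "counter", "nfs": fitting}

# NM proposes two NFs; the DM can only host one of them.
proposal = {"nfs": [{"name": "fw", "cpus": 4}, {"name": "lb", "cpus": 4}]}
reply = dm_evaluate(proposal, capacity_cpus=6)
```

In a real deployment the evaluation would also weigh the QoS, line-card and ingress/egress constraints listed above; the sketch keeps only compute capacity to show the accept/counter round-trip.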
Information sent from the NM 318 (e.g. the CM 326) to a DM 306 for NE configuration common to all slices, or from the SLM 320 to a DM 306 for NE configuration (per service or per slice), may include:
- In the case of domain abstraction:
- NFs to be hosted, NF interconnection and associated QoS requirements, ingress NF (and possibly desired incoming line card to be used for the NF), egress NF (and possibly desired outgoing line card to be used for the NF), per line card maximum rate support needed for incoming or outgoing traffic, and in the case of virtualization, NF-specific properties (including impact on traffic rate), NF-specific compute resource requirements
- NF-specific operation parameter configuration
- In the case of domain exposure:
- NF location within the domain, NF interconnection and associated QoS requirements, ingress NF (and desired incoming line card to be used for the NF), egress NF (and desired outgoing line card to be used for the NF), per line card maximum rate support needed for incoming or outgoing traffic, and in the case of virtualization, NF-specific properties (including impact on traffic rate), NF-specific compute resource requirements
- NF-specific operation parameter configuration
Information sent from the NM 318 (e.g. the PM 328 and/or FM 330) to a DM 306 for network-level NF-specific performance/fault monitoring configuration common to all slices, or from the SLM 320 to a DM 306 for NF-specific performance/fault monitoring configuration (per service or per slice), may include:
- Time intervals for performance report, to enable periodic reporting, for example. In some embodiments, a predetermined value, such as “infinity”, may indicate that reporting is disabled.
- Threshold values for performance reports, for example to enable reporting triggered by a performance change (either an increase or a decrease). In some embodiments, a predetermined value, such as “infinity”, may indicate that reporting is disabled.
- Threshold values for fault alarms, such as, for example, a performance degradation threshold. In some embodiments, a predetermined value, such as “infinity”, may indicate that the alarm is disabled.
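The "infinity disables the feature" convention above lends itself to a simple sketch, where each interval or threshold defaults to infinity and a finite value switches the corresponding behavior on. The function and field names are illustrative assumptions.

```python
# Sketch of a monitoring configuration in which "infinity" disables a
# report or alarm, as suggested in the text above. Names are illustrative.
import math

def make_monitoring_config(report_interval_s=math.inf,
                           perf_change_threshold=math.inf,
                           degradation_alarm_threshold=math.inf):
    # A finite value enables the feature; infinity leaves it disabled.
    return {
        "report_interval_s": report_interval_s,
        "periodic_reporting": math.isfinite(report_interval_s),
        "change_triggered": math.isfinite(perf_change_threshold),
        "alarm_enabled": math.isfinite(degradation_alarm_threshold),
    }

# Enable only periodic reporting, every 60 seconds.
cfg = make_monitoring_config(report_interval_s=60.0)
```

This mirrors how an NM 318 or SLM 320 could push a single configuration message to a DM 306 while leaving unused reports and alarms off by default.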
Information sent from the DM 306 to the SLM 320 (e.g. the SI-PM 340) for per-service and/or per-slice performance monitoring may include:
- In the case of domain abstraction:
- line card performance, such as Per line card IO delay.
- Internal switching performance, such as internal packet switching delay (in number of packets per second, from one incoming line card to one outgoing line card) or per in/out line card pair packet switching delay.
- compute performance (per NF or overall), such as the number of VMs used (or available), number of CPUs used (or available), disk storage occupied (or available), disk IO delay
- In the case of domain exposure:
- Per node performance information similar to that described above for the case of domain abstraction; and in the case of a radio node, the number of Radio Bearers (RBs) used (or available)
- Per link performance: bandwidth used (or available); if wireless link, (average) spectral efficiency
Information sent from a DM 306 to the NM 318 (e.g. the FM 330) for network-level fault alarming common to all slices, or from a DM 306 to the SLM 320 (e.g. the SL-FM 336) for per service or per slice fault alarming, may include:
- In the case of domain abstraction
- line card failure
- Internal switching failure for a particular in-out line card pair
- compute failure (per NF)
- In the case of domain exposure
- Node failure
- Link failure
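The split between abstraction-level and exposure-level fault alarms above can be captured by a small validator that rejects an alarm kind raised under the wrong reporting mode. The alarm vocabulary below simply re-encodes the two lists above; the message shape is an illustrative assumption.

```python
# Sketch: fault alarms a DM might raise toward the NM or SLM, grouped by
# the domain-abstraction and domain-exposure cases listed above.

ALLOWED_ALARMS = {
    "abstraction": {"line_card_failure", "switching_failure", "compute_failure"},
    "exposure": {"node_failure", "link_failure"},
}

def make_alarm(scope, kind, detail):
    # Reject alarm kinds that do not belong to the reporting mode in use.
    if kind not in ALLOWED_ALARMS[scope]:
        raise ValueError(f"{kind} is not a valid {scope} alarm")
    return {"scope": scope, "kind": kind, "detail": detail}

# A domain operating in exposure mode reports a failed link between nodes.
alarm = make_alarm("exposure", "link_failure", {"link": ("n1", "n2")})
```

Keeping the two vocabularies separate matches the structure of the list above: an abstracted domain reports internal line-card, switching and compute faults, while an exposed domain reports node and link faults directly.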
In the example of
As may be seen in
In the embodiments of
It should be appreciated that one or more steps of the embodiment methods provided herein may be performed by corresponding units or modules. For example, a signal may be transmitted by a transmitting unit or a transmitting module. A signal may be received by a receiving unit or a receiving module. A signal may be processed by a processing unit or a processing module. Other steps may be performed by an establishing unit/module for establishing a serving cluster, an instantiating unit/module, an establishing unit/module for establishing a session link, a maintaining unit/module, or other performing units/modules for performing the steps described above. The respective units/modules may be hardware, software, or a combination thereof. For instance, one or more of the units/modules may be an integrated circuit, such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC).
Although the present invention has been described with reference to specific features and embodiments thereof, it is evident that various modifications and combinations can be made thereto without departing from the invention. The specification and drawings are, accordingly, to be regarded simply as an illustration of the invention as defined by the appended claims, and are contemplated to cover any and all modifications, variations, combinations or equivalents that fall within the scope of the present invention.
Claims
1. A method for managing a communications network, the method comprising:
- providing a parent network manager in a parent domain of the communications network, the parent network manager comprising at least one of a parent Service Oriented Network Auto Creation (SONAC) function and a parent MANagement and Orchestration (MANO) function; and
- providing a child network manager in a child domain of the communications network, the child network manager comprising at least one of a child Service Oriented Network Auto Creation (SONAC) function and a child MANagement and Orchestration (MANO) function;
- wherein at least one of the parent network manager and the child network manager comprises the Service Oriented Network Auto Creation (SONAC) function,
- the parent and child network managers cooperating to optimize management of the parent and child domains of the communications network.
2. The method as claimed in claim 1, wherein the child network manager represents the child domain to the parent network manager as a Network Function Virtualization Capable virtual node of the communications network.
3. The method as claimed in claim 1, wherein the child network manager is responsive to either one or both of network service request messages and virtual network function management messages from the parent network manager to implement network management decisions of the parent network manager within the child domain.
4. The method as claimed in claim 3, wherein an adaptation function is configured to adapt messages from the parent network manager and forward corresponding adapted messages to the child network manager.
5. The method as claimed in claim 4, wherein the adaptation function comprises replacing one or more identifiers in messages from the parent network manager with corresponding identifiers known by the child domain network manager.
6. A network management entity of a communications network, the network management entity comprising:
- a Service Oriented Network Auto Creation (SONAC) function including: a Software Defined Topology (SDT) controller configured to define a logical network topology; a Software Defined Protocol (SDP) controller configured to define a logical end-to-end protocol; and a Software Defined Resource Allocation (SDRA) controller configured to define an allocation of network resources for logical connection in the logical network topology; and
- a MANagement and Orchestration (MANO) function including a Network Function Virtualization Orchestrator (NFVO) configured to receive topology information from the Software Defined Topology (SDT) controller of the SONAC function.
7. The network management entity as claimed in claim 6, wherein the MANO function further comprises a Virtual Network Function Manager (VNFM) configured to receive protocol information from the SDP controller of the SONAC function.
8. The network management entity as claimed in claim 6, wherein the MANO function further comprises a Virtual Infrastructure Manager (VIM) configured to receive resource allocation data from the SDRA controller of the SONAC.
9. A network management entity of a communications network, the network management entity comprising:
- a Service Oriented Network Auto Creation (SONAC) function including: a Software Defined Topology (SDT) controller configured to define a logical network topology; a Software Defined Protocol (SDP) controller configured to define a logical end-to-end protocol; and a Software Defined Resource Allocation (SDRA) controller configured to define an allocation of network resources for logical connection in the logical network topology;
- wherein the SONAC is configured to implement functionality of a Network Function Virtualization Orchestrator (NFVO) function of a conventional MANO.
Type: Application
Filed: Oct 26, 2017
Publication Date: May 3, 2018
Applicant: Huawei Technologies Co., Ltd. (Shenzhen)
Inventors: Xu LI (Nepean), Nimal Gamini SENARATH (Ottawa)
Application Number: 15/794,318