TENANT DHCP IN AN OVERLAY NETWORK

Systems, methods, and non-transitory computer-readable storage media for dynamic host configuration protocol (DHCP) relay functionality in overlay networks. A system on an overlay network fabric can first receive a DHCP request from a host device, the system including a tunnel endpoint (TEP) configured to connect the host device to the overlay network fabric via a tunnel. The system then enables a relay agent information option for relaying the DHCP request with sub-options inserted into the DHCP request, and inserts information into the sub-options in the DHCP request to yield a modified DHCP request. Here, the information can include an address of the system and an interface of a circuit associated with the system, etc. Next, the system forwards the modified DHCP request to a destination DHCP server based on an address of the destination DHCP server associated with the DHCP request.

Description
RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application No. 61/900,359, filed on Nov. 5, 2013, the content of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The present technology pertains to the dynamic host configuration protocol (DHCP), and more specifically pertains to DHCP relay functionality in an overlay network.

BACKGROUND

Recent advancements in network technologies have allowed networks to support an increased demand for network data. In addition, networks have become larger and more complex, with massive amounts of devices joining the networks and communicating with each other. Yet as the size and complexity of a network grows, it becomes extremely difficult to manage the network settings of current devices and deploy new devices in the network. For example, with larger networks, implementing static addressing can be an extremely onerous task. On the other hand, dynamic and automated addressing schemes, such as DHCP, can be very difficult to implement, particularly in large and complex networks which often have various types of logical boundaries that prevent network settings from being distributed throughout the network. Unfortunately, this often leads to improper network settings on specific devices, which can create serious network connectivity issues. For example, improper network settings can prevent a device, such as a server, from being able to communicate on the network, and may result in addressing conflicts, which can even bring down a network.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:

FIG. 1 illustrates an example network device according to some aspects of the subject technology;

FIGS. 2A and 2B illustrate example system embodiments according to some aspects of the subject technology;

FIG. 3 illustrates a schematic block diagram of an example architecture for a network fabric;

FIG. 4 illustrates an example overlay network;

FIG. 5 illustrates a diagram of an example DHCP service implementation; and

FIG. 6 illustrates an example method embodiment.

DESCRIPTION OF EXAMPLE EMBODIMENTS

Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.

Overview

Data centers and networks are increasingly being built using virtual machines (VMs), virtual switches and routers, and physical networking devices with virtualization capabilities, such as virtual tunnel endpoints, in order to increase the size and capabilities of the network(s) by adding devices and virtual workloads using virtualization (e.g., overlay networks). Such virtualization devices often stack inside a hypervisor to forward packets inside of the host machine, or across host machines by leveraging an overlay network technology, such as virtual extensible LAN (VXLAN) technology. Such virtualization technologies also allow increasing numbers of devices, such as client devices and servers, to communicate on the network. This is at least partly a result of a greater number of network segments and addressing schemes available for use by devices to communicate on the network. For example, different routers and routing schemes can be used to allow clients to communicate across different network subnets, and even allow overlapping addresses to be used by a router without conflict.

This in turn can create an enormous challenge for system administrators in maintaining and deploying proper configuration settings for network devices, and automating service provisioning for devices on the network. For example, the complexity of a network with various addressing schemes and virtual network segments can prohibit DHCP service from being provided on the network, or otherwise limit DHCP service to only allow unique addresses or only serve devices connected to specific network segments or elements. Indeed, in some cases, it can be extremely difficult for a DHCP server to ascertain a proper address scope for selecting an address to be allocated to a device, or otherwise recognize, process, and relay DHCP messages appropriately.

The approaches set forth herein, on the other hand, can provide DHCP service to devices on any type of network, including overlay networks. In some cases, when a tenant DHCP request packet is sent, the ingress switch, such as the ingress leaf or top-of-rack (ToR) on a fabric, can insert its own IP address, such as its overlay VRF IP address, in the DHCP information option (DHCP Option 82), and subsequently act as a relay to forward DHCP messages to the tenant VRF. The DHCP server's response packet can be forwarded back to a switch that connects the DHCP server to the network fabric. The packet can then be forwarded to the pervasive switch virtual interface (SVI) IP address, and eventually received by one of the switches where the pervasive SVI is configured.

This receiving switch can look at the DHCP option 82 in the DHCP packet, which is retained (from the original DHCP request) in the DHCP response by the DHCP server, and identify the ingress switch connected to the host that originated the DHCP request. The receiving switch can then forward the DHCP packet to the ingress switch identified in the DHCP option 82, which can receive the DHCP packet and deliver it to the originating host. The originating host can thus receive the DHCP response to the DHCP request based on the address inserted by the ingress switch into the DHCP option 82.

In some cases, when allocating an address to the originating host, the DHCP server can determine the addressing scope based on the address of the ingress switch as indicated in the DHCP option 82. For example, if the gateway address of the ingress switch is in a class A IP network, the DHCP server can determine that the originating host should receive a class A IP address, and consequently identify an available class A IP address from its pool of available addresses in that scope. In some cases, the DHCP server can map the GIADDR (the relay agent's gateway address) to an address pool from which addresses are assigned.
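
As a simplified, non-limiting illustration of this scope selection, the following Python sketch maps a relay gateway address (GIADDR) to an address pool and picks a free address from it. The subnets, pool contents, and function names are illustrative assumptions rather than part of any particular embodiment.

```python
# Minimal sketch: a DHCP server selecting an address pool ("scope") based on
# the relay agent gateway address (GIADDR) carried in the relayed request.
# Pool definitions and addresses below are illustrative assumptions.
import ipaddress

SCOPES = {  # hypothetical mapping of relay (GIADDR) subnets to allocatable pools
    ipaddress.ip_network("10.0.0.0/24"): list(ipaddress.ip_network("10.0.0.0/24").hosts()),
    ipaddress.ip_network("192.168.1.0/24"): list(ipaddress.ip_network("192.168.1.0/24").hosts()),
}

ALLOCATED = set()

def allocate_address(giaddr: str):
    """Pick a free address from the scope that contains the relay's GIADDR."""
    gateway = ipaddress.ip_address(giaddr)
    for subnet, pool in SCOPES.items():
        if gateway in subnet:
            for candidate in pool:
                if candidate != gateway and candidate not in ALLOCATED:
                    ALLOCATED.add(candidate)
                    return str(candidate)
    return None  # no configured scope matches the relay address

print(allocate_address("10.0.0.1"))   # e.g. 10.0.0.2
```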

If the DHCP server supports multi-VRF, then the bridge domain (BD) virtual network identifier (VNID) can be identified as the subscriber identifier or the virtual private network identifier. On the other hand, if the DHCP server does not support multi-VRF, the BD VNID can be ascertained based on the relay agent's gateway address (GIADDR). In some cases, endpoint group (EPG) VNID can be encoded in the DHCP option 82 for EPG derivation to avoid BD-side flooding in stateless scenarios.

DESCRIPTION

The disclosed technology addresses the need in the art for accurate and efficient DHCP services in overlay solutions. Disclosed are systems, methods, and computer-readable storage media for DHCP services in overlay networks. A brief introductory description of relevant concepts, as well as example systems and networks, as illustrated in FIGS. 1 through 4, is first disclosed herein. A detailed description of DHCP services in overlay solutions, related concepts, and example variations, will then follow. These variations shall be described herein as the various embodiments are set forth. The disclosure now turns to an introductory description of relevant networking concepts.

A computer network is a geographically distributed collection of nodes interconnected by communication links and segments for transporting data between endpoints, such as personal computers and workstations. Many types of networks are available, with the types ranging from local area networks (LANs) and wide area networks (WANs) to overlay and software-defined networks, such as virtual extensible local area networks (VXLANs).

LANs typically connect nodes over dedicated private communications links located in the same general physical location, such as a building or campus. WANs, on the other hand, typically connect geographically dispersed nodes over long-distance communications links, such as common carrier telephone lines, optical lightpaths, synchronous optical networks (SONET), or synchronous digital hierarchy (SDH) links. LANs and WANs can include layer 2 (L2) and/or layer 3 (L3) networks and devices.

The Internet is an example of a WAN that connects disparate networks throughout the world, providing global communication between nodes on various networks. The nodes typically communicate over the network by exchanging discrete frames or packets of data according to predefined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP). In this context, a protocol can refer to a set of rules defining how the nodes interact with each other. Computer networks may be further interconnected by an intermediate network node, such as a router, to extend the effective “size” of each network.

Overlay networks generally allow virtual networks to be created and layered over a physical network infrastructure. Overlay network protocols, such as Virtual Extensible LAN (VXLAN), Network Virtualization using Generic Routing Encapsulation (NVGRE), Network Virtualization Overlays (NVO3), and Stateless Transport Tunneling (STT), provide a traffic encapsulation scheme which allows network traffic to be carried across L2 and L3 networks over a logical tunnel. Such logical tunnels can be originated and terminated through virtual tunnel end points (VTEPs).
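
As a minimal sketch of how such encapsulation can be framed, the following Python code builds the 8-byte VXLAN header defined by RFC 7348 (a flags byte with the VNI-valid bit set, followed by a 24-bit VNI) and prepends it to an inner L2 frame; the outer IP/UDP headers (UDP port 4789) are omitted, and the frame and VNI values are illustrative.

```python
# Minimal sketch of VXLAN encapsulation at a tunnel endpoint: the original L2
# frame is wrapped in an 8-byte VXLAN header carrying a 24-bit VNI, and the
# result is then carried inside an outer UDP/IP packet between endpoints.
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08                               # "I" flag: a valid VNI is present
    header = struct.pack("!B3x", flags) + struct.pack("!I", vni << 8)
    return header + inner_frame                # becomes the outer UDP payload

encapsulated = vxlan_encapsulate(b"\x00" * 64, vni=5001)
print(len(encapsulated))                       # 72: 8-byte header + 64-byte frame
```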

Moreover, overlay networks can include virtual segments, such as VXLAN segments in a VXLAN overlay network, which can include virtual L2 and/or L3 overlay networks over which VMs communicate. The virtual segments can be identified through a virtual network identifier (VNI), such as a VXLAN network identifier, which can specifically identify an associated virtual segment or domain.

Network virtualization allows hardware and software resources to be combined in a virtual network. For example, network virtualization can allow multiple numbers of VMs to be attached to the physical network via respective virtual LANs (VLANs). The VMs can be grouped according to their respective VLAN, and can communicate with other VMs as well as other devices on the internal or external network.

Network segments, such as physical or virtual segments; networks; devices; ports; physical or logical links; and/or traffic in general can be grouped into a bridge or flood domain. A bridge domain or flood domain can represent a broadcast domain, such as an L2 broadcast domain. A bridge domain or flood domain can include a single subnet, but can also include multiple subnets. Moreover, a bridge domain can be associated with a bridge domain interface on a network device, such as a switch. A bridge domain interface can be a logical interface which supports traffic between an L2 bridged network and an L3 routed network. In addition, a bridge domain interface can support internet protocol (IP) termination, VPN termination, address resolution handling, MAC addressing, etc. Both bridge domains and bridge domain interfaces can be identified by the same index or identifier.

Furthermore, endpoint groups (EPGs) can be used in a network for mapping applications to the network. In particular, EPGs can use a grouping of application endpoints in a network to apply connectivity and policy to the group of applications. EPGs can act as a container for buckets or collections of applications, or application components, and tiers for implementing forwarding and policy logic. EPGs also allow separation of network policy, security, and forwarding from addressing by instead using logical application boundaries.

Cloud computing can also be provided in one or more networks to provide computing services using shared resources. Cloud computing can generally include Internet-based computing in which computing resources are dynamically provisioned and allocated to client or user computers or other devices on-demand, from a collection of resources available via the network (e.g., “the cloud”). Cloud computing resources, for example, can include any type of resource, such as computing, storage, and network devices, virtual machines (VMs), etc. For instance, resources may include service devices (firewalls, deep packet inspectors, traffic monitors, load balancers, etc.), compute/processing devices (servers, CPUs, memory, brute force processing capability), storage devices (e.g., network attached storages, storage area network devices), etc. In addition, such resources may be used to support virtual networks, virtual machines (VMs), databases, applications (Apps), etc.

Cloud computing resources may include a “private cloud,” a “public cloud,” and/or a “hybrid cloud.” A “hybrid cloud” can be a cloud infrastructure composed of two or more clouds that inter-operate or federate through technology. In essence, a hybrid cloud is an interaction between private and public clouds where a private cloud joins a public cloud and utilizes public cloud resources in a secure and scalable manner. Cloud computing resources can also be provisioned via virtual networks in an overlay network, such as a VXLAN.

The Dynamic Host Configuration Protocol (DHCP) is a protocol used in IP networks for dynamically distributing network settings to devices connecting to the network. In larger networks, static addressing can become extremely onerous. To this end, DHCP allows automated provisioning of network addressing to devices on the network. Thus, when a device connects to the network, it can send a DHCP request for network configuration settings to a DHCP server, which maintains a list of used and available network settings to allow the DHCP server to allocate addresses without creating addressing conflicts. The configuration settings can include, for example, an IP address, a subnet mask, a gateway address, a domain name system (DNS) server address, etc.
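
For concreteness, the following Python sketch assembles a minimal DHCP DISCOVER message in the standard BOOTP/DHCP wire format (fixed header, magic cookie, then options); the MAC address and transaction ID are illustrative values, and a production client would include additional options such as a parameter request list.

```python
# Minimal sketch of a DHCP DISCOVER message as a client would broadcast it:
# the 236-byte BOOTP fixed header, the DHCP magic cookie, and the options.
import struct

def build_discover(mac: bytes, xid: int = 0x12345678) -> bytes:
    fixed = struct.pack(
        "!BBBBIHH4s4s4s4s16s64s128s",
        1,                        # op: BOOTREQUEST
        1,                        # htype: Ethernet
        6,                        # hlen: MAC address length
        0,                        # hops (incremented by relay agents)
        xid,                      # transaction ID
        0,                        # secs
        0x8000,                   # flags: broadcast
        b"\x00" * 4,              # ciaddr
        b"\x00" * 4,              # yiaddr
        b"\x00" * 4,              # siaddr
        b"\x00" * 4,              # giaddr (filled in by a relay agent)
        mac.ljust(16, b"\x00"),   # chaddr
        b"\x00" * 64,             # sname
        b"\x00" * 128,            # file
    )
    cookie = b"\x63\x82\x53\x63"                # DHCP magic cookie
    options = bytes([53, 1, 1]) + bytes([255])  # option 53 = DISCOVER, then end option
    return fixed + cookie + options

packet = build_discover(b"\xaa\xbb\xcc\x00\x11\x22")
print(len(packet))   # 244 bytes for this minimal message
```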

In networks with different network subnets, a relay agent can be implemented to relay DHCP/BOOTP messages between clients and servers on different subnets. In some cases, a router or switch can be enabled to function as a relay agent to relay DHCP messages to and from a DHCP server across subnets. In addition, DHCP option 82 (DHCP relay agent information option) can be enabled to allow a relay agent to further insert additional information into a DHCP message. For example, DHCP option 82 can allow circuit-specific information to be inserted into the DHCP request relayed to the DHCP server. The DHCP option 82 can include multiple sub-options for inserting additional information. In some cases, the sub-options can include a circuit ID sub-option and a remote ID sub-option. The circuit ID can indicate which circuit the DHCP request originated from, while the remote ID can indicate the remote information of the circuit, which typically refers to information about the relay agent.
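
The following Python sketch shows the general layout of option 82 as a type-length-value (TLV) structure with a circuit ID sub-option (code 1) and a remote ID sub-option (code 2) per RFC 3046; the sub-option contents shown are illustrative.

```python
# Minimal sketch of the DHCP relay agent information option (option 82) as a
# TLV: sub-option 1 carries the circuit ID and sub-option 2 the remote ID
# (sub-option codes per RFC 3046). The values below are illustrative.
def sub_option(code: int, value: bytes) -> bytes:
    return bytes([code, len(value)]) + value

def build_option_82(circuit_id: bytes, remote_id: bytes) -> bytes:
    payload = sub_option(1, circuit_id) + sub_option(2, remote_id)
    return bytes([82, len(payload)]) + payload

opt82 = build_option_82(circuit_id=b"Ethernet1/1", remote_id=b"relay-switch-1")
print(opt82.hex())
```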

In some embodiments, the remote ID can include the tunnel endpoint (TEP) IP and the BD-VNID. Moreover, the circuit ID can include the interface (IfIndex) and the EPG VNID of the ingress interface. In other embodiments, the DHCP option 82 can include additional sub-options, such as a server ID override, which can include the pervasive SVI IP of the BD; a link ID selection, which can include the subnet of the pervasive IP; the GIADDR, which can include the interface IP facing the DHCP server; and a VPNID, which can include the VRF name of the client VRF.
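
As a purely illustrative sketch of how these fields might be packed, the following Python code encodes a remote ID (TEP IP plus BD-VNID), a circuit ID (IfIndex plus EPG VNID), and the additional sub-options noted above. The sub-option codes used for link selection (5), server identifier override (11), and the VPN/VRF identifier (151) are assumptions drawn from RFC 3527, RFC 5107, and RFC 6607 rather than from this disclosure, and the encodings are simplified.

```python
# Illustrative packing of the fabric-specific option 82 sub-options described
# above. Sub-option codes 1 and 2 follow RFC 3046; codes 5, 11, and 151 are
# assumed from RFC 3527, RFC 5107, and RFC 6607. Values are hypothetical.
import socket
import struct

def sub_option(code: int, value: bytes) -> bytes:
    return bytes([code, len(value)]) + value

def fabric_option_82(tep_ip, bd_vnid, if_index, epg_vnid,
                     pervasive_svi_ip, pervasive_subnet, vrf_name) -> bytes:
    remote_id = socket.inet_aton(tep_ip) + struct.pack("!I", bd_vnid)   # TEP IP + BD-VNID
    circuit_id = struct.pack("!II", if_index, epg_vnid)                 # IfIndex + EPG VNID
    payload = (
        sub_option(1, circuit_id) +
        sub_option(2, remote_id) +
        sub_option(5, socket.inet_aton(pervasive_subnet)) +   # link selection (assumed code)
        sub_option(11, socket.inet_aton(pervasive_svi_ip)) +  # server ID override (assumed code)
        sub_option(151, vrf_name.encode())                    # VRF / VPN ID (assumed, simplified)
    )
    return bytes([82, len(payload)]) + payload

print(fabric_option_82("10.1.1.1", 16000, 42, 32000,
                       "10.0.0.1", "10.0.0.0", "VRF-Tenant").hex())
```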

Having provided an introductory description of relevant concepts, the disclosure now turns to FIG. 1, which illustrates an example network device 110 suitable for implementing the present invention. Network device 110 includes a master central processing unit (CPU) 162, interfaces 168, and a bus 115 (e.g., a PCI bus). When acting under the control of appropriate software or firmware, the CPU 162 is responsible for executing packet management, error detection, and/or routing functions, such as miscabling detection functions, for example. The CPU 162 preferably accomplishes all these functions under the control of software including an operating system and any appropriate applications software. CPU 162 may include one or more processors 163 such as a processor from the Motorola family of microprocessors or the MIPS family of microprocessors. In an alternative embodiment, processor 163 is specially designed hardware for controlling the operations of router 110. In a specific embodiment, a memory 161 (such as non-volatile RAM and/or ROM) also forms part of CPU 162. However, there are many different ways in which memory could be coupled to the system.

The interfaces 168 are typically provided as interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the router 110. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications intensive tasks as packet switching, media control and management. By providing separate processors for the communications intensive tasks, these interfaces allow the master microprocessor 162 to efficiently perform routing computations, network diagnostics, security functions, etc.

Although the system shown in FIG. 1 is one specific network device of the present invention, it is by no means the only network device architecture on which the present invention can be implemented. For example, an architecture having a single processor that handles communications as well as routing computations, etc. is often used. Further, other types of interfaces and media could also be used with the router.

Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory 161) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc.

FIG. 2A and FIG. 2B illustrate example system embodiments. The more appropriate embodiment will be apparent to those of ordinary skill in the art when practicing the present technology. Persons of ordinary skill in the art will also readily appreciate that other system embodiments are possible.

FIG. 2A illustrates a conventional system bus computing system architecture 200 wherein the components of the system are in electrical communication with each other using a bus 205. Exemplary system 200 includes a processing unit (CPU or processor) 210 and a system bus 205 that couples various system components including the system memory 215, such as read only memory (ROM) 220 and random access memory (RAM) 225, to the processor 210. The system 200 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 210. The system 200 can copy data from the memory 215 and/or the storage device 230 to the cache 212 for quick access by the processor 210. In this way, the cache can provide a performance boost that avoids processor 210 delays while waiting for data. These and other modules can control or be configured to control the processor 210 to perform various actions. Other system memory 215 may be available for use as well. The memory 215 can include multiple different types of memory with different performance characteristics. The processor 210 can include any general purpose processor and a hardware module or software module, such as module 1 232, module 2 234, and module 3 236 stored in storage device 230, configured to control the processor 210 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 210 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.

To enable user interaction with the computing device 200, an input device 245 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 235 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing device 200. The communications interface 240 can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.

Storage device 230 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs) 225, read only memory (ROM) 220, and hybrids thereof.

The storage device 230 can include software modules 232, 234, 236 for controlling the processor 210. Other hardware or software modules are contemplated. The storage device 230 can be connected to the system bus 205. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 210, bus 205, display 235, and so forth, to carry out the function.

FIG. 2B illustrates an example computer system 250 having a chipset architecture that can be used in executing the described method and generating and displaying a graphical user interface (GUI). Computer system 250 is an example of computer hardware, software, and firmware that can be used to implement the disclosed technology. System 250 can include a processor 255, representative of any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform identified computations. Processor 255 can communicate with a chipset 260 that can control input to and output from processor 255. In this example, chipset 260 outputs information to output 265, such as a display, and can read and write information to storage device 270, which can include magnetic media, and solid state media, for example. Chipset 260 can also read data from and write data to RAM 275. A bridge 280 for interfacing with a variety of user interface components 285 can be provided for interfacing with chipset 260. Such user interface components 285 can include a keyboard, a microphone, touch detection and processing circuitry, a pointing device, such as a mouse, and so on. In general, inputs to system 250 can come from any of a variety of sources, machine generated and/or human generated.

Chipset 260 can also interface with one or more communication interfaces 290 that can have different physical interfaces. Such communication interfaces can include interfaces for wired and wireless local area networks, for broadband wireless networks, as well as personal area networks. Some applications of the methods for generating, displaying, and using the GUI disclosed herein can include receiving ordered datasets over the physical interface or be generated by the machine itself by processor 255 analyzing data stored in storage 270 or 275. Further, the machine can receive inputs from a user via user interface components 285 and execute appropriate functions, such as browsing functions by interpreting these inputs using processor 255.

It can be appreciated that example systems 200 and 250 can have more than one processor 210 or be part of a group or cluster of computing devices networked together to provide greater processing capability.

FIG. 3 illustrates a schematic block diagram of an example architecture 300 for a network fabric 312. The network fabric 312 can include spine switches 302A, 302B, . . . , 302N (collectively “302”) connected to leaf switches 304A, 304B, 304C, . . . , 304N (collectively “304”) in the network fabric 312.

Spine switches 302 can be L3 switches in the fabric 312. However, in some cases, the spine switches 302 can also, or otherwise, perform L2 functionalities. Further, the spine switches 302 can support various capabilities, such as 40 or 10 Gbps Ethernet speeds. To this end, the spine switches 302 can include one or more 40 Gigabit Ethernet ports. Each port can also be split to support other speeds. For example, a 40 Gigabit Ethernet port can be split into four 10 Gigabit Ethernet ports.

In some embodiments, one or more of the spine switches 302 can be configured to host a proxy function that performs a lookup of the endpoint address identifier to locator mapping in a mapping database on behalf of leaf switches 304 that do not have such mapping. The proxy function can do this by parsing through the packet to the encapsulated, tenant packet to get to the destination locator address of the tenant. The spine switches 302 can then perform a lookup of their local mapping database to determine the correct locator address of the packet and forward the packet to the locator address without changing certain fields in the header of the packet.

When a packet is received at a spine switch 302i, the spine switch 302i can first check if the destination locator address is a proxy address. If so, the spine switch 302i can perform the proxy function as previously mentioned. If not, the spine switch 302i can lookup the locator in its forwarding table and forward the packet accordingly.
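
A minimal sketch of this proxy decision, with illustrative addresses and tables, might look as follows in Python: if the destination locator is the proxy address, the spine resolves the tenant destination through its mapping database before the forwarding lookup.

```python
# Minimal sketch of the spine proxy behavior described above: when the packet's
# destination locator is the proxy address, look up the tenant endpoint in the
# mapping database and substitute the real locator; otherwise forward normally.
# Addresses and table contents are illustrative.
PROXY_ADDRESS = "10.255.255.1"

MAPPING_DB = {            # endpoint identifier -> locator (egress TEP)
    "192.168.10.5": "10.0.3.2",
    "192.168.20.9": "10.0.4.7",
}

FORWARDING_TABLE = {      # locator -> egress fabric port
    "10.0.3.2": "fabric-port-1",
    "10.0.4.7": "fabric-port-2",
}

def spine_forward(dst_locator: str, inner_dst: str) -> str:
    if dst_locator == PROXY_ADDRESS:
        dst_locator = MAPPING_DB[inner_dst]    # proxy lookup on behalf of the leaf
    return FORWARDING_TABLE[dst_locator]

print(spine_forward(PROXY_ADDRESS, "192.168.10.5"))   # fabric-port-1
```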

Spine switches 302 connect to leaf switches 304 in the fabric 312. Leaf switches 304 can include access ports (or non-fabric ports) and fabric ports. Fabric ports can provide uplinks to the spine switches 302, while access ports can provide connectivity for devices, hosts, endpoints, VMs, or external networks to the fabric 312.

Leaf switches 304 can reside at the edge of the fabric 312, and can thus represent the physical network edge. In some cases, the leaf switches 304 can be top-of-rack (“ToR”) switches configured according to a ToR architecture. In other cases, the leaf switches 304 can be aggregation switches in any particular topology, such as end-of-row (EoR) or middle-of-row (MoR) topologies. The leaf switches 304 can also represent aggregation switches, for example.

The leaf switches 304 can be responsible for routing and/or bridging the tenant packets and applying network policies. In some cases, a leaf switch can perform one or more additional functions, such as implementing a mapping cache, sending packets to the proxy function when there is a miss in the cache, encapsulating packets, enforcing ingress or egress policies, etc.

Moreover, the leaf switches 304 can contain virtual switching functionalities, such as a virtual tunnel endpoint (VTEP) function as explained below in the discussion of VTEP 408 in FIG. 4. To this end, leaf switches 304 can connect the fabric 312 to an overlay network, such as overlay network 400 illustrated in FIG. 4.

Network connectivity in the fabric 312 can flow through the leaf switches 304. Here, the leaf switches 304 can provide servers, resources, endpoints, external networks, or VMs access to the fabric 312, and can connect the leaf switches 304 to each other. In some cases, the leaf switches 304 can connect EPGs to the fabric 312 and/or any external networks. Each EPG can connect to the fabric 312 via one of the leaf switches 304, for example.

Endpoints 310A-E (collectively “310”) can connect to the fabric 312 via leaf switches 304. For example, endpoints 310A and 310B can connect directly to leaf switch 304A, which can connect endpoints 310A and 310B to the fabric 312 and/or any other one of the leaf switches 304. Similarly, endpoint 310E can connect directly to leaf switch 304C, which can connect endpoint 310E to the fabric 312 and/or any other of the leaf switches 304. On the other hand, endpoints 310C and 310D can connect to leaf switch 304B via L2 network 306. Similarly, the wide area network (WAN) can connect to the leaf switches 304C or 304D via L3 network 308.

Endpoints 310 can include any communication device, such as a computer, a server, a switch, a router, etc. In some cases, the endpoints 310 can include a server, hypervisor, or switch configured with a VTEP functionality which connects an overlay network, such as overlay network 400 below, with the fabric 312. For example, in some cases, the endpoints 310 can represent one or more of the VTEPs 408A-D illustrated in FIG. 4. Here, the VTEPs 408A-D can connect to the fabric 312 via the leaf switches 304. The overlay network can host physical devices, such as servers, applications, EPGs, virtual segments, virtual workloads, etc. In addition, the endpoints 310 can host virtual workload(s), clusters, and applications or services, which can connect with the fabric 312 or any other device or network, including an external network. For example, one or more endpoints 310 can host, or connect to, a cluster of load balancers or an EPG of various applications.

Although the fabric 312 is illustrated and described herein as an example leaf-spine architecture, one of ordinary skill in the art will readily recognize that the subject technology can be implemented based on any network fabric, including any data center or cloud network fabric. Indeed, other architectures, designs, infrastructures, and variations are contemplated herein.

FIG. 4 illustrates an exemplary overlay network 400. Overlay network 400 uses an overlay protocol, such as VXLAN, NVGRE, NVO3, or STT, to encapsulate traffic in L2 and/or L3 packets which can cross overlay L3 boundaries in the network. As illustrated in FIG. 4, overlay network 400 can include hosts 406A-D interconnected via network 402.

Network 402 can include a packet network, such as an IP network, for example. Moreover, network 402 can connect the overlay network 400 with the fabric 312 in FIG. 3. For example, VTEPs 408A-D can connect with the leaf switches 304 in the fabric 312 via network 402.

Hosts 406A-D include virtual tunnel end points (VTEPs) 408A-D, which can be virtual nodes or switches configured to encapsulate and de-encapsulate data traffic according to a specific overlay protocol of the network 400, for the various virtual network identifiers (VNIDs) 410A-D. Moreover, hosts 406A-D can include servers containing a VTEP functionality, hypervisors, and physical switches, such as L3 switches, configured with a VTEP functionality. For example, hosts 406A and 406B can be physical switches configured to run VTEPs 408A-B. Here, hosts 406A and 406B can be connected to servers 404A-D, which, in some cases, can include virtual workloads through VMs loaded on the servers, for example.

In some embodiments, network 400 can be a VXLAN network, and VTEPs 408A-D can be VXLAN tunnel end points. However, as one of ordinary skill in the art will readily recognize, network 400 can represent any type of overlay or software-defined network, such as NVGRE, STT, or even overlay technologies yet to be invented.

The VNIDs can represent the segregated virtual networks in overlay network 400. Each of the overlay tunnels (VTEPs 408A-D) can include one or more VNIDs. For example, VTEP 408A can connect to virtual or physical devices or workloads residing in VNIDs 1 and 2; VTEP 408B can connect to virtual or physical devices or workloads residing in VNIDs 1 and 3; VTEP 408C can connect to virtual or physical devices or workloads residing in VNIDs 1, 2, 3, and another instance of VNID 2; and VTEP 408D can connect to virtual or physical devices or workloads residing in VNIDs 3 and 4, as well as separate instances of VNIDs 2 and 3. As one of ordinary skill in the art will readily recognize, any particular VTEP can, in other embodiments, have numerous VNIDs, including more than the 4 VNIDs illustrated in FIG. 4. Moreover, any particular VTEP can connect to physical or virtual devices or workloads residing in one or more VNIDs.
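
The VTEP-to-VNID membership illustrated in FIG. 4 can be summarized, purely for illustration, as a simple lookup structure such as the following Python sketch.

```python
# Minimal sketch of the VTEP-to-VNID membership shown in FIG. 4, used to decide
# whether a given VTEP participates in a given virtual segment.
VTEP_VNIDS = {
    "VTEP-408A": {1, 2},
    "VTEP-408B": {1, 3},
    "VTEP-408C": {1, 2, 3},
    "VTEP-408D": {2, 3, 4},
}

def serves_segment(vtep: str, vnid: int) -> bool:
    return vnid in VTEP_VNIDS.get(vtep, set())

print(serves_segment("VTEP-408A", 2))   # True
print(serves_segment("VTEP-408A", 4))   # False
```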

The traffic in overlay network 400 can be segregated logically according to specific VNIDs. This way, traffic intended for VNID 1 can be accessed by devices residing in VNID 1, while other devices residing in other VNIDs (e.g., VNIDs 2, 3, and 4) can be prevented from accessing such traffic. In other words, devices or endpoints in specific VNIDs can communicate with other devices or endpoints in the same specific VNIDs, while traffic from separate VNIDs can be isolated to prevent devices or endpoints in other specific VNIDs from accessing traffic in different VNIDs.

Each of the servers 404A-D and VMs 404E-L can be associated with a respective VNID or virtual segment, and communicate with other servers or VMs residing in the same VNID or virtual segment. For example, server 404A can communicate with server 404C and VM 404E because they all reside in the same VNID, viz., VNID 1. Similarly, server 404B can communicate with VMs 404F, 404H, and 404L because they all reside in VNID 2.

Each of the servers 404A-D and VMs 404E-L can represent a single server or VM, but can also represent multiple servers or VMs, such as a cluster of servers or VMs. Moreover, VMs 404E-L can host virtual workloads, which can include application workloads, resources, and services, for example. On the other hand, servers 404A-D can host local workloads on a local storage and/or a remote storage, such as a remote database. However, in some cases, servers 404A-D can similarly host virtual workloads through VMs residing on the servers 404A-D.

VTEPs 408A-D can encapsulate packets directed at the various VNIDs 1-4 in the overlay network 400 according to the specific overlay protocol implemented, such as VXLAN, so traffic can be properly transmitted to the correct VNID and recipient(s) (i.e., server or VM). Moreover, when a switch, router, or other network device receives a packet to be transmitted to a recipient in the overlay network 400, it can analyze a routing table, such as a lookup table, to determine where such packet needs to be transmitted so the traffic reaches the appropriate recipient. For example, if VTEP 408A receives a packet from endpoint 404B that is intended for endpoint 404H, VTEP 408A can analyze a routing table that maps the intended endpoint, endpoint 404H, to a specific switch that is configured to handle communications intended for endpoint 404H. VTEP 408A might not initially know, when it receives the packet from endpoint 404B, that such packet should be transmitted to VTEP 408D in order to reach endpoint 404H. Accordingly, by analyzing the routing table, VTEP 408A can lookup endpoint 404H, which is the intended recipient, and determine that the packet should be transmitted to VTEP 408D, as specified in the routing table based on endpoint-to-switch mappings or bindings, so the packet can be transmitted to, and received by, endpoint 404H as expected.
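
As a simplified sketch of that lookup, the following Python code models an endpoint-to-VTEP binding table consulted by the ingress VTEP; the endpoint names and addresses are illustrative.

```python
# Minimal sketch of the lookup described above: an ingress VTEP maps the
# intended recipient to the egress VTEP that handles it, then encapsulates
# the frame toward that VTEP. The bindings shown are illustrative.
ENDPOINT_TO_VTEP = {
    "endpoint-404H": ("VTEP-408D", "10.0.4.4"),   # endpoint -> (VTEP name, VTEP IP)
    "endpoint-404C": ("VTEP-408B", "10.0.4.2"),
}

def next_hop_vtep(recipient: str):
    """Return the egress VTEP for a recipient, or None to trigger the proxy/flood path."""
    return ENDPOINT_TO_VTEP.get(recipient)

print(next_hop_vtep("endpoint-404H"))   # ('VTEP-408D', '10.0.4.4')
```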

As one of ordinary skill in the art will readily recognize, the examples and technologies provided above are simply for clarity and explanation purposes, and can include many additional concepts and variations.

FIG. 5 illustrates a diagram 500 of an example DHCP service implementation. The DHCP service implementation can be on a fabric 312, which can include one or more VRF instances. In some embodiments, the fabric 312 can include a VRF-tenant 502 and a VRF-provider 504. The VRF-tenant 502 can refer to a VRF instance in tenant space within the fabric 312. On the other hand, the VRF-provider 504 can refer to a VRF instance in provider space within the fabric 312. As one of ordinary skill in the art will readily recognize, other VRF instances can also exist in other embodiments. In other words, the fabric 312 can include a single VRF or a multi-VRF, and the DHCP service implementation can function in either scenario.

The fabric 312 can include switches 506-514 which can connect the fabric 312 to non-fabric devices, such as clients, servers, L2 networks, L3 networks, etc. In some cases, the switches 506-514 can be ToR or leaf switches on the fabric. For example, the switches 506-514 can include leaf switches 304, as illustrated in FIG. 3. In addition, the switches 506-514 can include virtual tunneling capabilities in order to support an overlay solution. Thus, one or more of the switches 506-514 can serve as tunnel endpoints (TEPs) which can connect to a virtual tunnel endpoint (VTEP) on the overlay network by encapsulating traffic through a virtual tunnel configured to enable communication between the overlay network and the underlying physical network.

Switch 506 can connect to the client 516 as well as host 518, allowing the client 516 and host 518 to communicate with the fabric 312. Switch 508 can similarly connect to host 518 and, like switch 506, can include TEP functionalities for establishing a virtual tunnel between the TEP in switch 508 and the VTEP 520 on the host 518.

As previously mentioned, the host 518 can include a VTEP 520, which can be configured to provide a virtual tunnel for communicating with TEPs on the fabric 312, such as the TEPs on switches 506 and 508. This way, host 518 can host clients, VMs, and/or virtual workloads, which can reside on an overlay space and connect to the underlying, physical network through a virtual tunnel established between the VTEP 520 and the TEPs on the switches 506 and 508 on the fabric 312. In FIG. 5, host 518 is shown hosting a client 522 which provides a DHCP service to hosts. The client 522 here can be a physical DHCP service, a VM running a DHCP service, or a DHCP service appliance, for example.

The connection between the client 516 and host 518 can be configured on the switches 506 and 508 as being in the same bridge domain 526 (BD 1). Thus, any communication flooded from one of the switches 506 and 508 to BD 1 will reach both the client 516 and host 518, as both reside on the same BD.

If BD 1 includes multiple subnets, it can create a DHCP challenge, where DHCP requests and responses may not cross the multiple subnets unless properly configured as described herein. Similarly, with multiple VRFs, a BD can have secondary or overlapping IP addresses, which can also create a DHCP challenge where DHCP requests and responses may not be properly relayed unless properly configured as described herein.

To avoid DHCP service problems resulting from multiple subnets, secondary or overlapping IP addresses, multiple routing instances in one or more VRFs, and any other DHCP relay problem, a smart relay solution can be implemented. Here, the DHCP relay agent information option (DHCP information option or DHCP option 82) can be enabled on one or more relay switches. For example, option 82 can be enabled on switch 506 to allow switch 506 to function as a relay agent. Thus, using option 82, switch 506 can insert additional information in a DHCP request, not only to allow the response to that request to be properly routed back to the originating client once it is received from the DHCP server, but also to ensure that the address information allocated to the originating client comports with the proper addressing scope.

For example, when receiving a DHCP request, switch 506 can insert its own IP address (i.e., its provider VRF IP address) into an option 82 sub-option in the DHCP request and forward the modified DHCP request to the DHCP server 522. The DHCP server 522 can then extract the IP address of switch 506 from the DHCP request, and identify an available address for the originating host based on the scope of the address of switch 506 as indicated in the DHCP request. For example, if switch 506 has a class A IP address and is connected to the originating host, the DHCP server can check for available IP addresses in the class A range (e.g., 10.0.0.1 through 10.0.0.254 in this example; note that some addresses in the scope may be reserved, such as 10.0.0.1 for a gateway, while other addresses may serve other purposes, such as 10.0.0.255 as the subnet broadcast address and 10.0.0.0 as the network address).

The DHCP server can then select network settings, including an IP address, in the proper scope, and forward the settings back to the originating host as a DHCP response. The DHCP response sent by the DHCP server can maintain the information inserted in the option 82 to allow any receiving device to determine where the DHCP response should ultimately be sent (the receiving gateway). In other words, the IP address of switch 506, inserted into the option 82 by switch 506 at the time of receiving the DHCP request, can allow the DHCP response to be forwarded from the DHCP server back to the switch 506. Once the switch 506 receives the DHCP response, it can forward the DHCP response back to the originating client, such as client 516. The originating client can then extract the information from the DHCP response and automatically and dynamically configure its network settings, allowing it to connect to the network without creating a conflict and without requiring manual, static addressing performed by a network administrator.

To illustrate the process, assume that client 516 originates a DHCP request intended for DHCP server 522. Here, the client 516 can transmit a discover message on the subnet, VNID, or network segment of the switch 506, as a user datagram protocol (UDP) broadcast. Switch 506, which connects to the client 516, can receive the message and relay the message forward. The switch 506 can be enabled to function as a relay agent with the relay agent information option enabled, allowing switch 506 to insert additional information in the message so the message can be relayed across subnets, VRFs, BDs, segments, and other boundaries.

Upon receipt of the message, the switch 506 can insert its own IP address (GIADDR) into an option 82 sub-option and forward the message to the VTEP 520 on the host 518 on BD 1. The message is then received by the client on the host 518, which serves as DHCP server 522 and processes the message to retrieve or allocate network configuration settings for the client 516.

The DHCP server 522 can then send a lease offer to client 516. Here, the packet is routed back to the GIADDR. In some cases, the GIADDR can belong to multiple switches based on the pervasive SVI presence. Then, the receiving switch can look at the option 82 to redirect the packet to the originating switch (i.e., switch 506). Thus, the lease offer can be forwarded or redirected to the switch 506 connected to the client 516 based on the information in the option 82. In this example, the lease offer can be forwarded to switch 506 based on the IP address of switch 506, which was inserted into option 82 by switch 506 at the time switch 506 received the DHCP message from the client 516. Thus, irrespective of where the DHCP server 522 or any other device along the way sends the lease offer, the lease offer can always be directed back to the correct switch, switch 506, based on the information inserted into the option 82. The switch 506 will thus be able to receive the lease offer and relay it to the client 516 so the client can obtain the DHCP lease.
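
A minimal, illustrative sketch of the redirect step is shown below in Python: the switch that receives the response walks the DHCP options, finds option 82, and reads the remote ID sub-option to recover the address of the originating ingress switch. The assumption that the first four bytes of the remote ID carry the TEP IP follows the example encoding discussed above.

```python
# Minimal sketch of the redirect described above: a switch receiving the DHCP
# response at the pervasive SVI address walks the DHCP options, extracts the
# option 82 remote ID inserted by the ingress switch, and forwards the packet
# toward that switch. Standard TLV option layout is assumed.
import socket

def find_relay_agent_address(dhcp_options: bytes):
    i = 0
    while i < len(dhcp_options):
        code = dhcp_options[i]
        if code == 255:                        # end option
            break
        if code == 0:                          # pad option
            i += 1
            continue
        length = dhcp_options[i + 1]
        value = dhcp_options[i + 2:i + 2 + length]
        if code == 82:
            j = 0
            while j < len(value):              # walk the sub-options
                sub_code, sub_len = value[j], value[j + 1]
                sub_val = value[j + 2:j + 2 + sub_len]
                if sub_code == 2:              # remote ID: first 4 bytes = TEP IP (assumed layout)
                    return socket.inet_ntoa(sub_val[:4])
                j += 2 + sub_len
        i += 2 + length
    return None

options = bytes([82, 8, 2, 6]) + socket.inet_aton("10.1.1.1") + b"\x3e\x80" + bytes([255])
print(find_relay_agent_address(options))       # 10.1.1.1
```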

The information inserted into the option 82 can vary in different embodiments. Indeed, the option 82 sub-options used and the information inserted into the sub-options can vary depending on the specific environment, configuration settings, and/or circumstances. For example, the DHCP option 82 can include multiple sub-options for inserting additional information, as previously noted. In some cases, the sub-options can include a circuit ID sub-option and a remote ID sub-option. The circuit ID sub-option can indicate which circuit the DHCP request originated from, while the remote ID can indicate the remote information of the circuit, which typically refers to information about the relay agent.

In some embodiments, the remote ID sub-option can include the TEP IP and/or VNID of the BD in the overlay network (BD-VNID). Moreover, the circuit ID sub-option can include the interface (IfIndex) and the EPG VNID of the ingress interface. This can indicate what interface and VNID in the overlay network to use to forward messages to the specific EPG. In other embodiments, the DHCP option 82 can include additional sub-options, such as a server ID override, which can include the pervasive SVI IP of the BD, to indicate where to forward a message to the BD when the virtual interface is spread out over multiple physical devices, for example; a link ID selection, which can include the subnet of the pervasive IP; the GIADDR, which can include the interface IP facing the DHCP server; and a VPNID, which can include the VRF name of the client VRF, such as “VRF Tenant” from 502 in FIG. 5.

Accordingly, the DHCP service can function even in environments with multiple BDs and/or VRFs. For example, if a DHCP request is sent from client 516 to switch 506 and later forwarded to a DHCP server on a second VRF, such as DHCP server 524 on provider VRF 504, the DHCP response or lease offer can still be relayed back to the switch 506 connected to the client 516 and further to the client 516 based on the information inserted into the DHCP option 82. In other words, the information provided in the DHCP option 82 can relay DHCP messages across multiple VRFs, VLANs, VNIDs, subnets, BDs, or any other boundary; and the type of information included in the DHCP option 82 can depend on the type of environment or boundaries that need to be crossed. As previously mentioned, the information inserted in the DHCP option 82 can include an address of the ingress switch associated with the originating host, information identifying the VRF of the ingress switch, information identifying the VNID and/or EPG of a host and/or switch for relaying the DHCP messages back to the host, the circuit information, the gateway information, interface information, Pervasive SVI IP information, VPNID information, remote ID information, tunneling information (e.g., TEP information, including physical TEP or virtual TEP), BD information, etc.

Having disclosed some basic system components and concepts, the disclosure now turns to the exemplary method embodiment shown in FIG. 6. For the sake of clarity, the method is described in terms of a switch 506, as shown in FIG. 5, configured to practice the method. The steps outlined herein are exemplary and can be implemented in any combination thereof, including combinations that exclude, add, or modify certain steps.

At step 600, the switch 506 first receives a DHCP request from a host device, the switch 506 being a TEP configured to connect the host device to the overlay fabric network 312 via a tunnel. The switch 506 can be a leaf switch, such as leaf switch 304, a TOR switch, an edge device on the fabric 312, an ingress switch on the fabric 312 connecting the host device to the fabric 312, etc. Moreover, the host device can be a client device, such as a user terminal or mobile device; a server; a resource, such as a printer or gaming system; a virtual machine; etc. Further, the DHCP request can be a DHCP lease request or DHCP discover message, for example.

At step 602, the switch 506 enables the relay agent information option for relaying the DHCP request with sub-option fields in the DHCP request, so that information can be inserted into at least one of the sub-option fields. When enabled, the relay agent information option allows DHCP option 82 to be used in the DHCP messages. As previously explained, DHCP option 82 provides sub-options in the DHCP messages into which information can be inserted, further expanding or augmenting the information carried in the DHCP messages.

The information inserted into the DHCP option 82 can include address information associated with the ingress switch (i.e., switch 506) communicating with the host device (for example the IP address of the ingress switch), information identifying the VRF of the ingress switch, information identifying the VNID and/or EPG for relaying the DHCP messages back to the host device, the circuit information (e.g., circuit ID), the gateway information (e.g., GIADDR), interface information (e.g., IfIndex), Pervasive SVI IP information, VPNID information, remote ID information, tunneling information (e.g., TEP information, including physical TEP or virtual TEP), VLAN information, BD information, etc.

In addition, the switch 506 can serve as a relay agent for DHCP messages. Here, the switch 506 can use the information in the DHCP option 82 to relay DHCP messages across boundaries, such as subnets, VNIDs, VLANs, EPGs, BDs, circuits, VRFs, segments, etc.

At step 604, the switch 506 inserts information into one or more sub-option fields in the DHCP request to yield a modified DHCP request, the information including an address of the switch 506 and/or an interface of a circuit associated with the switch 506. For example, the switch 506 can insert its TEP IP and/or BD-VNID into a sub-option in the DHCP request. In some cases, the switch 506 can also insert a circuit ID, which can include the interface index and EPG VNID of the ingress interface. In yet other cases, the switch 506 can include other information in various sub-options, including a pervasive SVI IP of the BD, a subnet of the pervasive SVI IP, a gateway address associated with the DHCP server, a VRF name, a MAC address of the switch 506, etc.
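
By way of a simplified illustration of step 604, the following Python sketch appends an option 82 payload to the options field of a received DHCP request, inserting it just before the end option (255) to yield the modified request; the option contents are illustrative, and the sketch assumes a well-formed options field.

```python
# Minimal sketch of step 604: inserting an option 82 payload into a received
# DHCP request's options field, just before the end option, to yield the
# modified request. Option contents below are illustrative.
def insert_option_82(dhcp_options: bytes, option_82: bytes) -> bytes:
    """Insert option 82 before the end option (assumes 0xff marks the end)."""
    end = dhcp_options.rfind(bytes([255]))
    if end == -1:                              # no end option found: append one
        return dhcp_options + option_82 + bytes([255])
    return dhcp_options[:end] + option_82 + dhcp_options[end:]

original = bytes([53, 1, 1, 255])                        # message type DISCOVER + end option
opt82 = bytes([82, 6, 1, 4]) + (42).to_bytes(4, "big")   # circuit ID sub-option: IfIndex 42
print(insert_option_82(original, opt82).hex())
```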

At step 606, the switch 506 then forwards the modified DHCP request to a destination DHCP server based on an address of the destination DHCP server associated with the DHCP request. In other words, the switch 506 relays the DHCP request to the DHCP server. The address, such as IP or media access control (MAC) address, of the DHCP server can be indicated in the DHCP request, such as the header of the DHCP request, for example. Thus, the switch 506 can forward the DHCP request to the address of the DHCP server as indicated in the DHCP request. However, in some embodiments, the DHCP server address can be configured on the switch 506 or listed in a table on the switch 506 such that the switch 506 can determine where to send any DHCP requests that it receives, even if such requests do not specify an address for the DHCP server. For example, in some cases, the DHCP request may not include an address of a DHCP server, but the switch 506 can nevertheless relay the DHCP request to the DHCP server either by performing a lookup or flooding the request to multiple addresses or an address group. Indeed, in some cases, the DHCP request may indicate 0.0.0.0 as the destination address, which would prompt the request to be flooded by the switch 506 to the network and/or the segment or subnet of the DHCP server.
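
The forwarding decision of step 606 can be sketched, with illustrative helper addresses, as follows: unicast to the server named in the request when one is present, and otherwise fall back to configured server addresses (or flooding, as described above).

```python
# Minimal sketch of step 606: the relay chooses where to send the modified
# request. A named server is unicast to directly; an unspecified destination
# (0.0.0.0) falls back to configured helper addresses. Addresses are
# illustrative assumptions.
CONFIGURED_HELPERS = ["10.0.5.10", "10.0.5.11"]   # hypothetical DHCP servers

def relay_targets(requested_server: str) -> list:
    if requested_server and requested_server != "0.0.0.0":
        return [requested_server]                 # unicast to the named server
    return list(CONFIGURED_HELPERS)               # fall back to configured servers

print(relay_targets("10.0.5.10"))   # ['10.0.5.10']
print(relay_targets("0.0.0.0"))     # ['10.0.5.10', '10.0.5.11']
```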

The DHCP server then receives the DHCP request and generates a lease offer or DHCP response. The DHCP response can include an IP address, a subnet mask, a DNS IP, a gateway IP, etc. Moreover, the DHCP response can also preserve the information inserted into the DHCP request through the DHCP option 82, to allow the DHCP response to be relayed back to the proper switch and ultimately the proper host device. The DHCP server then sends the DHCP response which is relayed back to the switch 506 based on the information inserted into the sub-options in the DHCP request. The switch 506 then receives the DHCP response and relays it to the host device.

The host device subsequently receives the DHCP response and applies the network settings in the DHCP response according to the lease offer. Accordingly, the host device can automatically and dynamically receive the network configuration settings it needs to communicate on the network, without creating addressing conflicts with other devices, which could cause severe problems. Moreover, the host device can receive the network settings in the DHCP response even when connecting to an overlay network with many different boundaries which would otherwise prevent DHCP information from being relayed across such boundaries.

For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.

In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.

Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.

Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.

The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.

Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. Moreover, claim language reciting “at least one of” a set indicates that one member of the set or multiple members of the set satisfy the claim.

Claims

1. A method comprising:

receiving, via a receiving switch on an overlay network fabric, a dynamic host configuration protocol (DHCP) request from a host device, the receiving switch comprising a tunnel endpoint (TEP) configured to connect the host device to the overlay network fabric via a tunnel;
enabling a relay agent information option for relaying the DHCP request with sub-option fields on the DHCP request, the sub-option fields for inserting information into the DHCP request;
inserting information into at least one of the sub-option fields in the DHCP request to yield a modified DHCP request, the information comprising at least one of an address of the receiving switch and an interface of a circuit associated with the receiving switch; and
forwarding the modified DHCP request to a destination DHCP server based on an address of the destination DHCP server associated with the DHCP request.

2. The method of claim 1, wherein the information is inserted into a sub-option field in the DHCP request based on the relay agent information option, wherein the sub-option field comprises a remote identifier, the address of the receiving switch comprising at least one of a TEP IP address, a receiving switch media access control (MAC) address, and an overlay virtual routing and forwarding (VRF) IP address, and the relay agent information option being option 82.

3. The method of claim 1, wherein the receiving switch comprises at least one of a top-of-rack switch, a leaf switch, a virtual switch, an edge device in the overlay network fabric, a virtual tunnel endpoint (VTEP), an ingress switch in the overlay network fabric, and a port in a pervasive switch virtual interface.

4. The method of claim 1, wherein the information further comprises at least one of a circuit identifier, a server identifier override, a link identifier selection, a gateway interface address, and a virtual network identifier.

5. The method of claim 4, wherein the circuit identifier comprises at least one of an interface index value associated with an ingress interface and an endpoint group (EPG) virtual network identifier (VNID) associated with the ingress interface.

6. The method of claim 4, wherein the server identifier override comprises a pervasive switch virtual interface (SVI) IP address of a bridge domain associated with the receiving switch.

7. The method of claim 6, wherein the link identifier selection comprises a subnet of the pervasive SVI IP address.

8. The method of claim 4, wherein the gateway interface address (GIADDR) comprises one of an IP address associated with an interface facing the destination DHCP server or a pervasive IP address associated with a bridge domain in a virtual routing and forwarding (VRF) instance.

9. The method of claim 4, wherein the virtual network identifier (VNID) comprises a virtual routing and forwarding (VRF) name.

10. The method of claim 1, wherein virtual machines reside in the overlay network fabric and communicate with the overlay network fabric via a tunnel provided by the receiving switch.

11. The method of claim 1, wherein the overlay network fabric comprises at least one of a virtual extensible local area network (VXLAN), a Network Virtualization using Generic Routing Encapsulation (NVGRE) network, a stateless transport tunneling (STT) network, a spine-leaf network, and a Clos network, the method further comprising:

receiving a response to the DHCP request from the DHCP server, the response comprising a DHCP lease offer; and
relaying the response to the host device.

12. A system comprising:

a processor; and
a computer-readable storage medium having stored therein instructions which, when executed by the processor, cause the processor to perform operations comprising: receiving, via a receiving switch on an overlay network, a dynamic host configuration protocol (DHCP) request from a device, the receiving switch comprising a tunnel endpoint (TEP) configured to connect the device to the overlay network via a tunnel; enabling a relay agent information option for DHCP requests on the receiving switch, the relay agent information option providing sub-option fields for inserting additional information into the DHCP request; inserting information into the sub-options in the DHCP request to yield a modified DHCP request, the information comprising at least one of an address of the receiving switch and interface information associated with a circuit where the receiving switch resides; and relaying the modified DHCP request to a destination DHCP server based on an address of the destination DHCP server associated with the DHCP request.

13. The system of claim 12, the computer-readable storage medium storing additional instructions which, when executed by the processor, result in an operation further comprising:

receiving a response to the DHCP request from the destination DHCP server, the response comprising a DHCP lease offer; and
relaying the response to the device based on routing information contained in at least one of the response and the DHCP request.

14. The system of claim 12, wherein the sub-option fields comprise a first field for indicating a remote identifier, and a second field for indicating the address of the receiving switch comprising an overlay virtual routing and forwarding (VRF) IP address, the relay agent information option being option 82.

15. The system of claim 12, wherein the receiving switch comprises at least one of a top-of-rack switch, a leaf switch, a virtual switch, an edge device in the overlay network fabric, a virtual tunnel endpoint (VTEP), and a port in a pervasive switch virtual interface.

16. The system of claim 12, wherein the information further comprises at least one of a circuit identifier, a server identifier override, a link identifier selection, a gateway interface address, and a virtual private network identifier.

17. A non-transitory computer-readable storage medium having stored therein instructions which, when executed by a processor, cause the processor to perform operations comprising:

receiving, via a receiving switch on an overlay network, a dynamic host configuration protocol (DHCP) request from a host, the receiving switch comprising a tunnel endpoint (TEP) configured to connect the host to the overlay network via a tunnel;
enabling a relay agent information option for DHCP requests on the receiving switch, the relay agent information option providing sub-option fields for inserting additional information into the DHCP request prior to forwarding the DHCP request;
inserting information into at least one of the sub-options in the DHCP request to yield a modified DHCP request, the information comprising at least one of an address of the receiving switch and an interface of a circuit associated with the receiving switch; and
forwarding the modified DHCP request to a destination DHCP server based on an address of the destination DHCP server associated with the DHCP request.

18. The non-transitory computer-readable storage medium of claim 17, storing additional instructions which, when executed by the processor, result in operations further comprising:

receiving a response to the DHCP request from the destination DHCP server; and
forwarding the response to the host based on routing information contained in at least one of the response and the DHCP request.

19. The non-transitory computer-readable storage medium of claim 17, wherein the information is inserted into a sub-option field in the DHCP request based on the relay agent information option, wherein the sub-option field comprises a remote identifier, the address of the receiving switch comprising an overlay virtual routing and forwarding (VRF) IP address, and the relay agent information option being option 82, and wherein the receiving switch comprises at least one of a top-of-rack switch, a leaf switch, a virtual switch, an edge device in the overlay network fabric, a virtual tunnel endpoint (VTEP), and a port in a pervasive switch virtual interface.

20. The non-transitory computer-readable storage medium of claim 17:

wherein the information further comprises at least one of a circuit identifier, a server identifier override, a link identifier selection, a gateway interface address, and a virtual network identifier;
wherein the circuit identifier comprises at least one of an interface index value associated with an ingress interface and an endpoint group (EPG) virtual network identifier (VNID) associated with the ingress interface;
wherein the server identifier override comprises a pervasive switch virtual interface (SVI) IP address of a broadcast domain associated with the receiving switch;
wherein the link identifier selection comprises a subnet of the pervasive SVI IP address;
wherein the gateway interface address (GIADDR) comprises one of an interface IP of an interface facing the destination DHCP server or a pervasive IP address associated with a bridge domain in a virtual routing and forwarding (VRF) instance; and
wherein the virtual network identifier (VNID) comprises a virtual routing and forwarding (VRF) name.
Patent History
Publication number: 20150124823
Type: Application
Filed: Sep 11, 2014
Publication Date: May 7, 2015
Inventors: Ayaskant Pani (Fremont, CA), Sanjay Thyamagundalu (Santa Clara, CA)
Application Number: 14/484,165
Classifications
Current U.S. Class: Processing Of Address Header For Routing, Per Se (370/392)
International Classification: H04L 12/715 (20060101); H04L 29/06 (20060101); H04L 29/12 (20060101);