SYSTEMS AND METHODS FOR NETWORK TOPOLOGY VALIDATION

- Dell Products L.P.

An information handling system may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the processor to determine a topology of connectivity of various components of a system comprising multiple information handling system chassis and apply validation rules to the topology to validate the topology.

Description
TECHNICAL FIELD

The present disclosure relates in general to information handling systems, and more particularly to providing for network topology validation in a system comprising multiple information handling system chassis.

BACKGROUND

As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.

In a system comprising multiple information handling system chassis, the various chassis may be of different types and may be cabled and wired together in a particular manner. A console for managing the system may need to determine the connectivity among the various chassis and components internal to the chassis in order to determine a topology of the system.

SUMMARY

In accordance with the teachings of the present disclosure, certain disadvantages and problems associated with validating a topology in a multi-chassis environment have been reduced or eliminated.

In accordance with embodiments of the present disclosure, an information handling system may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the processor to determine a topology of connectivity of various components of a system comprising multiple information handling system chassis and apply validation rules to the topology to validate the topology.

In accordance with these and other embodiments of the present disclosure, a method may include determining a topology of connectivity of various components of a system comprising multiple information handling system chassis and applying validation rules to the topology to validate the topology.

In accordance with these and other embodiments of the present disclosure, an article of manufacture may include a non-transitory computer readable medium and computer-executable instructions carried on the computer readable medium, the instructions readable by a processor, the instructions, when read and executed, for causing the processor to determine a topology of connectivity of various components of a system comprising multiple information handling system chassis and apply validation rules to the topology to validate the topology.

Technical advantages of the present disclosure may be readily apparent to one skilled in the art from the figures, description and claims included herein. The objects and advantages of the embodiments will be realized and achieved at least by the elements, features, and combinations particularly pointed out in the claims.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the claims set forth in this disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:

FIG. 1 illustrates a block diagram of a system comprising multiple information handling system chassis, in accordance with embodiments of the present disclosure;

FIG. 2 illustrates a flow chart of an example method for network topology discovery, in accordance with embodiments of the present disclosure;

FIG. 3 illustrates a flow chart of an example method for network topology validation, in accordance with embodiments of the present disclosure; and

FIG. 4 illustrates a block diagram of an information handling system, in accordance with embodiments of the present disclosure.

DETAILED DESCRIPTION

Preferred embodiments and their advantages are best understood by reference to FIGS. 1 through 4, wherein like numbers are used to indicate like and corresponding parts.

For the purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, entertainment, or other purposes. For example, an information handling system may be a personal computer, a personal digital assistant (PDA), a consumer electronic device, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include memory, one or more processing resources such as a central processing unit (“CPU”) or hardware or software control logic. Additional components of the information handling system may include one or more storage devices, one or more communications ports for communicating with external devices as well as various I/O devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communication between the various hardware components.

For the purposes of this disclosure, information handling resources may broadly refer to any component, system, device, or apparatus of an information handling system, including without limitation processors, buses, memories, I/O devices and/or interfaces, storage resources, network interfaces, motherboards, electro-mechanical devices (e.g., fans), displays, and power supplies.

For the purposes of this disclosure, computer-readable media may include any instrumentality or aggregation of instrumentalities that may retain data and/or instructions for a period of time. Computer-readable media may include, without limitation, storage media such as a direct access storage device (e.g., a hard disk drive or floppy disk), a sequential access storage device (e.g., a tape disk drive), compact disk, CD-ROM, DVD, random access memory (“RAM”), read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), and/or flash memory; as well as communications media such as wires, optical fibers, microwaves, radio waves, and other electromagnetic and/or optical carriers; and/or any combination of the foregoing.

FIG. 1 illustrates a block diagram of a system 100 comprising multiple information handling system chassis 101, in accordance with embodiments of the present disclosure. As depicted in FIG. 1, system 100 may comprise a plurality of chassis 101 (e.g., chassis 101a, 101b, and 101c), each chassis 101 including information handling systems and/or information handling resources, a private inter-chassis network 150, and a console 120. For example, chassis 101a may include a switch 106a and a chassis management controller 112. As another example, chassis 101b may include a switch 106b and a chassis management controller 112. As a further example, chassis 101c may include switches 106c and 106d, one or more servers 102, and a chassis management controller 112. The various servers 102 and switches 106 may each comprise ports 110 for interfacing with one another, with example connectivity among ports 110 depicted in FIG. 1. For the purposes of clarity and exposition, chassis 101a and 101b are shown without servers (e.g., servers 102). However, in some embodiments, one or both of chassis 101a and 101b may include one or more servers 102.

A server 102 may generally be operable to receive data from and/or communicate data to one or more information handling resources of chassis 101c. In certain embodiments, a server 102 may comprise a blade server having a modular physical design. In these and other embodiments, server 102 may comprise an M class server.

Each server 102 may include a host management controller 104. Host management controller 104 may be implemented by, for example, a microprocessor, microcontroller, DSP, ASIC, EEPROM, or any combination thereof. Host management controller 104 may be configured to communicate with chassis management controller 112. Such communication may be made, for example, via a private management network fabric integral to server 102 (not explicitly shown). Host management controller 104 may be configured to provide out-of-band management facilities for management of system 100. Such management may be made by information handling resources of chassis 101 even if system 100 is powered off or powered to a standby state. Host management controller 104 may include a processor, memory, and network connection separate from the rest of system 100. In certain embodiments, host management controller 104 may include or may be an integral part of a baseboard management controller (BMC) or an Integrated Dell Remote Access Controller (iDRAC).

A switch 106 may comprise any suitable system, device, or apparatus for receiving, processing, and forwarding packets. For example, each of switch 106a and 106b may serve as an interface between chassis 101c and a data network and may process and route packets between chassis 101c and such data network. As another example, switches 106c and 106d may each serve as an interface between servers 102 and other chassis 101 (e.g., 101a and 101b) external to chassis 101c.

Each port 110 may comprise a connector, slot, or another interface for receiving one end of a physical connection (e.g., wire, cable) coupled to a corresponding port 110 of another information handling resource.

A chassis management controller 112 may comprise any system, device, or apparatus configured to facilitate management and/or control of system 100 embodied by chassis 101, its information handling systems 102, and/or one or more of its component information handling resources. Chassis management controller 112 may be configured to issue commands and/or other signals to manage and/or control an information handling system 102 and/or information handling resources of system 100. Chassis management controller 112 may comprise a microprocessor, microcontroller, DSP, ASIC, field programmable gate array (“FPGA”), EEPROM, or any combination thereof. In some embodiments, chassis management controller 112 may provide a management console for user/administrator access to these functions. For example, chassis management controller 112 may implement Representational State Transfer (“REST”) or another suitable management protocol permitting a user to remotely access chassis management controller 112 to configure system 100 and its various information handling resources. In such embodiments, chassis management controller 112 may interface with a network interface, thus allowing for “out-of-band” control of chassis 101, such that communications to and from chassis management controller 112 are communicated via a management channel physically isolated from an “in-band” communication channel of chassis 101 over which non-management communication may take place. Thus, for example, if a failure occurs in chassis 101 that prevents an administrator from interfacing with chassis 101 via the in-band communication channel or a user interface associated with chassis 101 (e.g., power failure, etc.), the administrator may still be able to monitor and/or manage chassis 101 (e.g., to diagnose problems that may have caused failure) via chassis management controller 112. In the same or alternative embodiments, chassis management controller 112 may allow an administrator to remotely manage one or more parameters associated with operation of chassis 101 and its various information handling resources (e.g., power usage, processor allocation, memory allocation, security privileges, etc.). In some embodiments, chassis management controller 112 may include a management services module. Although beyond the scope of this disclosure, in some embodiments, one of the chassis management controllers 112 of the various chassis 101 may be selected as a “lead” chassis management controller 112, such that an administrator may manage the entirety of system 100 by interfacing with a single chassis management controller 112.

Private inter-chassis network 150 may comprise a network and/or fabric configured to couple chassis management controllers 112 of the various chassis 101 in system 100 to each other. In these and other embodiments, inter-chassis network 150 may include a communication infrastructure, which provides physical connections, and a management layer, which organizes the physical connections and chassis management controllers 112 communicatively coupled to private inter-chassis network 150. Private inter-chassis network 150 may be implemented as, or may be a part of, an Ethernet local area network (LAN) or any other appropriate architecture or system that facilitates the communication of signals, data, and/or messages.

Console 120 may be communicatively coupled to private inter-chassis network 150 and may comprise an information handling system or a program executable on an information handling system for monitoring and management of the various chassis 101 of system 100 and their various components. For example, as described in greater detail below, console 120 may be able to, based on messages communicated among chassis 101 and management controllers 112, determine the topology of connectivity among the various components of system 100. In some embodiments, console 120 may also be capable of rendering a graphical representation of the topology of connectivity to a user via a user interface (not explicitly shown) of console 120. Although FIG. 1 depicts console 120 as a stand-alone component of system 100, in some embodiments, console 120 may be integral to or otherwise embodied in a chassis management controller 112.

In operation, console 120 may leverage payload information of packets (e.g., Link Layer Discovery Protocol or “LLDP” packets) communicated among switches 106 and/or servers 102 in response to a switch 106 or server 102 being coupled to another device in system 100 and may also leverage advertisements (e.g., multicast Domain Name System or mDNS) communicated via private inter-chassis network 150 in order to extract information from such payloads and advertisements and process such information to determine a topology of connectivity of devices of system 100.

For example, in response to a server 102 being added to system 100, a host management controller 104 of such server may communicate LLDP packets which may be received not only by switches 106c and 106d of the chassis 101c comprising the server 102, but may also be passed through from switches 106c and 106d to switches 106a and 106b of chassis 101a and 101b, respectively. The payload of each such LLDP packet may include various identifying information, including without limitation identifying information (e.g., service tag or serial number) of the server 102 from which the LLDP packet originated, identifying information (e.g., service tag or serial number) of the chassis 101 comprising the server 102, identifying information (e.g., fully-qualified device descriptor) of the port 110 of the server 102 originating the LLDP packet, and the switch fabric (e.g., 106c or 106d) to which the packet-originating port 110 is coupled. Thus, based on such payload, a console 120 may be able to determine the chassis 101 housing the server 102, a switch fabric to which the server 102 is coupled, and connectivity between the server 102 and a switch 106 receiving the LLDP packet external to the chassis 101.
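Purely as an illustrative sketch of the processing just described, the following minimal Python fragment shows how fields extracted from such an LLDP payload might be folded into a single connectivity edge. The field names (server_tag, chassis_tag, port_fqdd, fabric) and the edge representation are assumptions for exposition only; the disclosure specifies the categories of identifying information carried in the payload, not a particular data model.

from dataclasses import dataclass

@dataclass(frozen=True)
class LldpRecord:
    # Hypothetical field names standing in for the identifying information
    # described above (service tags, port descriptor, switch fabric).
    server_tag: str    # service tag of the server that originated the packet
    chassis_tag: str   # service tag of the chassis housing that server
    port_fqdd: str     # fully-qualified descriptor of the originating port 110
    fabric: str        # switch fabric (e.g., "A" or "B") the port couples to

@dataclass(frozen=True)
class TopologyEdge:
    source: str        # "<chassis>/<server>/<port>"
    destination: str   # identifier of the switch that received the packet

def edge_from_lldp(record: LldpRecord, receiving_switch: str) -> TopologyEdge:
    """Derive one connectivity edge from a received LLDP record."""
    source = f"{record.chassis_tag}/{record.server_tag}/{record.port_fqdd}"
    return TopologyEdge(source=source, destination=receiving_switch)

if __name__ == "__main__":
    rec = LldpRecord("SRV123", "CHS003", "NIC.Slot.1-1", "A")
    print(edge_from_lldp(rec, receiving_switch="CHS001/IOM-A1"))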

As another example, chassis management controllers 112 may communicate mDNS advertisements on private inter-chassis network 150. Such advertisements may include various identifying information, including without limitation identifying information (e.g., service tag or serial number) of a chassis 101 including a switch fabric, identifying information (e.g., slot number) of a switch 106 of the switch fabric, and a switch type for the switch 106. Accordingly, console 120 may correlate such information from mDNS advertisements to the information from LLDP payloads to determine the switch type of each switch fabric.

As a further example, a switch 106 may include metadata information (e.g., an I/O module or “IOM” descriptor file) that is specific to a switch type and describes the uplink connectivity (e.g., connectivity between a server and switch 106c/106d) and downlink connectivity (e.g., connectivity between a switch 106c/106d and a switch 106a/106b) of a switch 106. Accordingly, console 120 may correlate such IOM metadata information to information from mDNS advertisements and/or information from LLDP payloads to determine the complete topology of each switch fabric.
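As a further illustrative sketch, again under assumed field names rather than any disclosed schema, the correlation of the three information sources might resemble the following, where mDNS advertisements are keyed by chassis and slot and IOM descriptor metadata is keyed by switch type; the switch type value used in the example is a placeholder.

def build_fabric_view(lldp_edges, mdns_ads, iom_metadata):
    """Merge LLDP-derived edges with mDNS and IOM descriptor data.

    lldp_edges:   list of dicts like {"chassis": ..., "slot": ..., "peer": ...}
    mdns_ads:     dict mapping (chassis, slot) -> {"switch_type": ...}
    iom_metadata: dict mapping switch_type -> expected uplink/downlink counts
    """
    fabric = {}
    for edge in lldp_edges:
        key = (edge["chassis"], edge["slot"])
        switch_type = mdns_ads.get(key, {}).get("switch_type", "unknown")
        entry = fabric.setdefault(key, {
            "switch_type": switch_type,
            "expected": iom_metadata.get(switch_type, {}),
            "peers": [],
        })
        entry["peers"].append(edge["peer"])
    return fabric

if __name__ == "__main__":
    edges = [{"chassis": "CHS003", "slot": "A1", "peer": "CHS001/IOM-A1"}]
    ads = {("CHS003", "A1"): {"switch_type": "TYPE-X"}}   # illustrative type
    iom = {"TYPE-X": {"uplinks": 2, "downlinks": 16}}     # illustrative metadata
    print(build_fabric_view(edges, ads, iom))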

Inter-switch connectivity (e.g., between switches 106a and 106b) and external network connectivity (e.g., between an external data network and switches 106a/106b) may be determined based on LLDP payloads communicated between switches (e.g., between switches 106a and 106b) and between individual switches (e.g., switch 106a or switch 106b) and the external data network.

FIG. 2 illustrates a flow chart of an example method 200 for network topology discovery, in accordance with embodiments of the present disclosure. According to some embodiments, method 200 may begin at step 202. As noted above, teachings of the present disclosure may be implemented in a variety of configurations of system 100. As such, the preferred initialization point for method 200 and the order of the steps comprising method 200 may depend on the implementation chosen.

At step 202, console 120 may receive an LLDP packet, mDNS advertisement, or IOM metadata in response to a physical connection being made in system 100. At step 204, console 120 may process information present in the LLDP packet, mDNS advertisement, or IOM metadata along with other information previously received from other LLDP packets, mDNS advertisements, or IOM metadata. At step 206, based on such processing, console 120 may update a table, list, map, database, or other data structure defining the topology of connectivity of the various components of system 100. At step 208, in some embodiments, console 120 may generate and output to a user interface a graphical display representative of the topology.
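A minimal sketch of steps 204 through 208, assuming each discovery event reduces to a pair of endpoint identifiers, is shown below; the event format, the connectivity map, and the textual rendering (standing in for the graphical display of step 208) are illustrative assumptions, not the disclosed implementation.

from collections import defaultdict

class TopologyStore:
    def __init__(self):
        # Step 206: data structure defining the topology of connectivity.
        self.links = defaultdict(set)   # node -> set of connected nodes

    def process_event(self, event):
        # Step 204: merge the endpoints named by an LLDP packet, mDNS
        # advertisement, or IOM metadata record with prior information.
        src, dst = event["source"], event["destination"]
        self.links[src].add(dst)
        self.links[dst].add(src)

    def render(self):
        # Step 208: a textual stand-in for a graphical display of the topology.
        return "\n".join(
            f"{a} <-> {b}"
            for a in sorted(self.links)
            for b in sorted(self.links[a])
            if a < b
        )

if __name__ == "__main__":
    store = TopologyStore()
    store.process_event({"source": "CHS003/IOM-A1", "destination": "CHS001/IOM-A1"})
    print(store.render())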

Although the foregoing contemplates that console 120 receives various traffic, it is noted that console 120 may not receive all of such traffic (e.g., console 120 may not receive LLDP packets). Instead, a switch 106 may receive LLDP packets and determine topology based on such packets. Thus, a complete topology of a system may be determined by the aggregate of traffic received by switches 106 and console 120.

Although FIG. 2 discloses a particular number of steps to be taken with respect to method 200, method 200 may be executed with greater or fewer steps than those depicted in FIG. 2. In addition, although FIG. 2 discloses a certain order of steps to be taken with respect to method 200, the steps comprising method 200 may be completed in any suitable order.

Method 200 may be implemented using system 100, and/or any other system operable to implement method 200. In certain embodiments, method 200 may be implemented partially or fully in software and/or firmware embodied in computer-readable media.

After building the topology as described above, console 120 may also be configured to validate the topology against a set of validation rules. To illustrate, in a multi-chassis group, such as system 100, that supports fabric-mode operations, the various I/O modules must typically be wired in prescriptive ways. After a topology is detected, it may be desirable that console 120 provide guidance to a user regarding any incorrect wiring in system 100. A topology could be incorrectly wired for numerous reasons. For example, a cable may be missing or not properly connected between its source and destination. As another example, cabling may violate prescriptive rules, such as a rule that dictates that a switch in one chassis 101 cannot be connected to a module in a different slot in a different chassis 101, a rule that dictates that wiring across different multi-chassis groups is not permitted, or a rule against mixing switch types in a chassis 101. Thus, validation rules may be complex rules that go beyond simply matching particular source ports and destination ports. Validation rules may be expressed on characteristics of I/O modules/switches 106 (e.g., slot location, mismatch of switch types in a chassis 101) or aggregate characteristics of chassis 101 in a group (e.g., group membership of chassis 101).

For further illustration, below are non-limiting examples of rules that may be applied by console 120 in validating a topology; a sketch of how such rules might be expressed programmatically appears after the list:

1) Specific source/destination port wiring: such a rule may be expressed in terms of a match of source and destination ports with specific port numbers, which may ensure that switches 106 are connected in a redundant manner.

2) Inter-chassis switch wiring: such a rule may be expressed in terms of a matching of a slot location for switches 106 disposed in different chassis. For example, if a first switch 106 is in a slot “A” of a first chassis 101 and is coupled to a second switch 106 of a second chassis 101, the rule may ensure that the second switch 106 is in slot “A” of the second chassis 101.

3) Group membership: such a rule may be expressed in terms of a match in chassis group membership between a device (e.g., switch 106) having a source port 110 and another device (e.g., switch 106) having a destination port 110 of a connection.

4) Switch types: such a rule may be expressed in terms of the switch type in an appropriate fabric location. For example, if two slots of a chassis 101 are populated with switches 106, this rule may ensure that both switches 106 are of the same switch type.
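The sketch below, referenced above, expresses the four example rules as predicates over an assumed data model of switches and links. The Switch and Link records, the rule names, and the example values are illustrative assumptions; the disclosure requires only that rules may reference port numbers, slot locations, switch types, and chassis group membership.

from dataclasses import dataclass

@dataclass
class Switch:
    chassis: str       # chassis identifier (e.g., service tag)
    slot: str          # slot location within the chassis (e.g., "A1")
    switch_type: str   # switch/IOM model type
    group: str         # multi-chassis group membership

@dataclass
class Link:
    src: Switch
    src_port: int
    dst: Switch
    dst_port: int

def rule_port_pairing(link, allowed_pairs):
    # 1) Specific source/destination port wiring (redundant cabling).
    return (link.src_port, link.dst_port) in allowed_pairs

def rule_same_slot(link):
    # 2) Inter-chassis switch wiring: slot "A" connects only to slot "A".
    return link.src.slot == link.dst.slot

def rule_same_group(link):
    # 3) Wiring across different multi-chassis groups is not permitted.
    return link.src.group == link.dst.group

def rule_uniform_switch_type(switches_in_chassis):
    # 4) All switches populated in one chassis share the same switch type.
    return len({s.switch_type for s in switches_in_chassis}) <= 1

if __name__ == "__main__":
    a = Switch("CHS001", "A1", "TYPE-X", "G1")
    b = Switch("CHS002", "B1", "TYPE-X", "G1")
    link = Link(a, 41, b, 41)
    print(rule_same_slot(link), rule_same_group(link))   # False True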

FIG. 3 illustrates a flow chart of an example method 300 for network topology validation, in accordance with embodiments of the present disclosure. According to some embodiments, method 300 may begin at step 302. As noted above, teachings of the present disclosure may be implemented in a variety of configurations of system 100. As such, the preferred initialization point for method 300 and the order of the steps comprising method 300 may depend on the implementation chosen.

At step 302, console 120 may determine the topology of system 100. In some embodiments, such topology may be constructed in accordance with method 200 described above.

At step 304, console 120 may apply a set of validation rules (e.g., one or more of the various rules discussed above) to the topology to determine if the topology satisfies the validation rules. If the topology satisfies the validation rules, method 300 may proceed again to step 302, and the topology may be continuously validated as the topology is changed. If the topology violates one or more of the validation rules, method 300 may proceed to step 306.

At step 306, in response to the topology violating one or more of the validation rules, console 120 may generate and output to a user interface an indication of which of the validation rules have been violated. Such indication may be in terms of a list or a graphical representation of the topology with indications within the graphical representation of the portions of the topology that are in violation of the validation rules. After completion of step 306, method 300 may proceed again to step 302, and the topology may be continuously validated as the topology is changed.
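A minimal sketch of steps 304 and 306, assuming rules are represented as named predicates over discovered links and that violations are reported as a simple list, follows; the link representation and rule names are hypothetical and only illustrate one way such an indication could be produced.

def validate_topology(links, rules):
    """Apply each (name, predicate) rule to every link; collect violations."""
    return [(name, link)
            for link in links
            for name, predicate in rules
            if not predicate(link)]

def report(violations):
    # Step 306: indicate which rules were violated, here as a simple list;
    # a graphical rendering could instead annotate the offending portions.
    if not violations:
        return "Topology satisfies all validation rules."
    return "\n".join(f"VIOLATION [{name}]: {link}" for name, link in violations)

if __name__ == "__main__":
    links = [{"src_slot": "A1", "dst_slot": "B1",
              "src_group": "G1", "dst_group": "G1"}]
    rules = [
        ("same-slot", lambda l: l["src_slot"] == l["dst_slot"]),
        ("same-group", lambda l: l["src_group"] == l["dst_group"]),
    ]
    print(report(validate_topology(links, rules)))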

Although FIG. 3 discloses a particular number of steps to be taken with respect to method 300, method 300 may be executed with greater or fewer steps than those depicted in FIG. 3. In addition, although FIG. 3 discloses a certain order of steps to be taken with respect to method 300, the steps comprising method 300 may be completed in any suitable order.

Method 300 may be implemented using system 100, and/or any other system operable to implement method 300. In certain embodiments, method 300 may be implemented partially or fully in software and/or firmware embodied in computer-readable media.

FIG. 4 illustrates a block diagram of an information handling system 400, in accordance with embodiments of the present disclosure. Information handling system 400 may be an instance of console 120 and/or a server 102 illustrated in FIG. 1. As shown, information handling system 400 includes one or more CPUs 402. In various embodiments, information handling system 400 may be a single-processor system including one CPU 402, or a multi-processor system including two or more CPUs 402 (e.g., two, four, eight, or any other suitable number). CPU(s) 402 may include any processor capable of executing program instructions. For example, in various embodiments, CPU(s) 402 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, POWERPC, ARM, SPARC, or MIPS ISAs, or any other suitable ISA. In multi-processor systems, each of CPU(s) 402 may commonly, but not necessarily, implement the same ISA. In some embodiments, a motherboard may be configured to provide structural support, power, and electrical connectivity between the various aforementioned components. Such a motherboard may include multiple connector sockets in various configurations, adapted to receive pluggable circuit cards, component chip packages, etc.

CPU(s) 402 are coupled to northbridge controller or chipset 404 via front-side bus 406. Northbridge controller 404 may be configured to coordinate I/O traffic between CPU(s) 402 and other components. For example, in this particular implementation, northbridge controller 404 is coupled to graphics device(s) 408 (e.g., one or more video cards or adaptors, etc.) via graphics bus 410 (e.g., an Accelerated Graphics Port or AGP bus, a Peripheral Component Interconnect or PCI bus, etc.). Northbridge controller 404 is also coupled to system memory 412 via memory bus 414. Memory 412 may be configured to store program instructions and/or data accessible by CPU(s) 402. In various embodiments, memory 412 may be implemented using any suitable memory technology, such as static RAM (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory.

Northbridge controller 404 is coupled to southbridge controller or chipset 416 via internal bus 418. Generally, southbridge controller 416 may be configured to handle various I/O operations of information handling system 400, and it may provide interfaces such as, for instance, Universal Serial Bus (USB), audio, serial, parallel, Ethernet, etc., via port(s), pin(s), and/or adapter(s) 432 over bus 434. For example, southbridge controller 416 may be configured to allow data to be exchanged between information handling system 400 and other devices, such as other information handling systems attached to a network. In various embodiments, southbridge controller 416 may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fibre Channel SANs; or via any other suitable type of network and/or protocol.

Southbridge controller 416 may also enable connection to one or more keyboards, keypads, touch screens, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data. Multiple I/O devices may be present in information handling system 400. In some embodiments, I/O devices may be separate from information handling system 400 and may interact with information handling system 400 through a wired or wireless connection. As shown, southbridge controller 416 is further coupled to one or more PCI devices 420 (e.g., modems, network cards, sound cards, video cards, etc.) via PCI bus 422. Southbridge controller 416 is also coupled to Basic I/O System (BIOS) 424, Super I/O Controller 426, and Baseboard Management Controller (BMC) 428 via Low Pin Count (LPC) bus 430.

BIOS 424 includes non-volatile memory having program instructions stored thereon. Those instructions may be usable by CPU(s) 402 to initialize and test other hardware components and/or to load an Operating System (OS) onto information handling system 400. As such, BIOS 424 may include a firmware interface that allows CPU(s) 402 to load and execute certain firmware, as described in more detail below. In some cases, such firmware may include program code that is compatible with the Unified Extensible Firmware Interface (UEFI) specification, although other types of firmware may be used.

BMC controller 428 may include non-volatile memory having program instructions stored thereon that are usable by CPU(s) 402 to enable remote management of information handling system 400. For example, BMC controller 428 may enable a user to discover, configure, and manage BMC controller 428, setup configuration options, resolve and administer hardware or software problems, etc. Additionally or alternatively, BMC controller 428 may include one or more firmware volumes, each volume having one or more firmware files used by the BIOS' firmware interface to initialize and test components of information handling system 400.

Super I/O Controller 426 combines interfaces for a variety of lower bandwidth or low data rate devices. Those devices may include, for example, floppy disks, parallel ports, keyboard and mouse, temperature sensor and fan speed monitoring, etc. For example, super I/O controller 426 may be coupled to one or more upstream sensors and to one or more downstream sensors.

In some cases, information handling system 400 may be configured to access different types of computer-accessible media separate from memory 412. Generally speaking, a computer-accessible medium may include any tangible, non-transitory storage media or memory media such as electronic, magnetic, or optical media—e.g., magnetic disk, a hard drive, a CD/DVD-ROM, a Flash memory, etc. coupled to information handling system 400 via northbridge controller 404 and/or southbridge controller 416.

As used herein, when two or more elements are referred to as “coupled” to one another, such term indicates that such two or more elements are in electronic communication or mechanical communication, as applicable, whether connected indirectly or directly, with or without intervening elements.

This disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Moreover, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative.

All examples and conditional language recited herein are intended for pedagogical objects to aid the reader in understanding the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present disclosure have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the disclosure.

Claims

1. An information handling system comprising:

a processor; and
a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the processor to: determine a topology of connectivity of various components of a system comprising multiple information handling system chassis; and apply validation rules to the topology to validate the topology.

2. The information handling system of claim 1, the instructions for further causing the processor to generate an indication to a graphical display indicative of violations of the validation rules.

3. The information handling system of claim 1, wherein the validation rules comprise at least one rule based on characteristics of one or more switches integral to the system.

4. The information handling system of claim 3, wherein the at least one rule ensures that a plurality of switches integral to a chassis of the system are of the same switch type.

5. The information handling system of claim 3, wherein the at least one rule ensures that for a first switch of a first chassis of the system coupled to a second switch of a second chassis of the system, that the first switch and second switch are disposed in a particular respective slot of each of the first chassis and the second chassis.

6. The information handling system of claim 1, wherein the validation rules comprise at least one rule based on characteristics of the system.

7. The information handling system of claim 6, wherein the at least one rule ensures that for a first component of the system coupled to a second component of the system that a chassis group membership of the first component matches that of the second component.

8. The information handling system of claim 1, wherein the validation rules comprise at least one rule based on characteristics other than a match of a source port of a first component to a destination port of another component.

9. A method comprising:

determining a topology of connectivity of various components of a system comprising multiple information handling system chassis; and
applying validation rules to the topology to validate the topology.

10. The method of claim 9, further comprising generating an indication to a graphical display indicative of violations of the validation rules.

11. The method of claim 9, wherein the validation rules comprise at least one rule based on characteristics of one or more switches integral to the system.

12. The method of claim 11, wherein the at least one rule ensures that a plurality of switches integral to a chassis of the system are of the same switch type.

13. The method of claim 11, wherein the at least one rule ensures that for a first switch of a first chassis of the system coupled to a second switch of a second chassis of the system, that the first switch and second switch are disposed in a particular respective slot of each of the first chassis and the second chassis.

14. The method of claim 9, wherein the validation rules comprise at least one rule based on characteristics of the system.

15. The method of claim 14, wherein the at least one rule ensures that for a first component of the system coupled to a second component of the system that a chassis group membership of the first component matches that of the second component.

16. The method of claim 9, wherein the validation rules comprise at least one rule based on characteristics other than a match of a source port of a first component to a destination port of another component.

17. An article of manufacture comprising:

a non-transitory computer readable medium; and
computer-executable instructions carried on the computer readable medium, the instructions readable by a processor, the instructions, when read and executed, for causing the processor to: determine a topology of connectivity of various components of a system comprising multiple information handling system chassis; and apply validation rules to the topology to validate the topology.

18. The article of manufacture of claim 17, the instructions for further causing the processor to generate an indication to a graphical display indicative of violations of the validation rules.

19. The article of manufacture of claim 17, wherein the validation rules comprise at least one rule based on characteristics of one or more switches integral to the system.

20. The article of manufacture of claim 17, wherein the validation rules comprise at least one rule based on characteristics of the system.

Patent History
Publication number: 20180241631
Type: Application
Filed: Feb 23, 2017
Publication Date: Aug 23, 2018
Patent Grant number: 10785118
Applicant: Dell Products L.P. (Round Rock, TX)
Inventors: Sudhir Vittal SHETTY (Cedar Park, TX), Sanjay SANE (Fremont, CA), Balaji RAJAGOPALAN (Sunnyvale, CA)
Application Number: 15/440,932
Classifications
International Classification: H04L 12/24 (20060101); H04L 12/947 (20060101);