NEW-GENERATION SERVER TECHNOLOGY

Enhancing Network Availability and Performance on the Dell PowerEdge 1855 Blade Server Using Network Teaming

Network interface card teaming and LAN on Motherboard teaming can provide organizations with a cost-effective method to quickly and easily enhance network reliability and throughput. This article discusses network teaming on the Dell PowerEdge 1855 blade server and expected functionality using different network configurations.

BY MIKE J. ROBERTS, DOUG WALLINGFORD, AND BALAJI MITTAPALLI

The modular Dell PowerEdge 1855 blade server integrates up to 10 server blades into a highly dense, highly integrated 7U enclosure, including two Gigabit Ethernet switch modules or Gigabit Ethernet pass-through modules and an embedded remote management module. Each integrated switch module is an independent Layer 2 switching device, with 10 internal ports connected to the integrated LAN on Motherboards (LOMs) on the server blades and six 10/100/1000 Mbps external uplink ports (a 10:6 ratio). Each integrated pass-through module is an independent 10-port device directly connecting the individual server blade's integrated LOM to a dedicated external RJ-45 port (a 1:1 ratio). Unlike a switch module, a pass-through module can connect only to 1000 Mbps ports on external switches; pass-through modules do not offer support for 10 Mbps or 100 Mbps connections.

Each server blade in the Dell PowerEdge 1855 blade server has two embedded LOMs based on a single dual-port Intel® 82546GB Gigabit¹ Ethernet Controller. The two LOMs reside on a 64-bit, 100 MHz Peripheral Component Interconnect Extended (PCI-X) bus.
The LOMs are hardwired to the internal ports of the integrated switches or pass-through modules over the midplane, a passive board with connectors and electrical traces that connects the server blades in the front of the chassis with the infrastructure in the rear. The LOMs provide dedicated 1000 Mbps full-duplex connections (see Figure 1). LOM 1 on each server blade connects to an internal port of switch 1 or pass-through module 1, and LOM 2 on each server blade connects to the counterpart port of switch 2 or pass-through module 2. Note: The second switch or pass-through module is optional. However, two installed switches or pass-through modules can enable additional connectivity or network redundancy and fault tolerance, provided that the limitations and capabilities of these features are fully understood and implemented correctly.

One important distinction between a blade server and other types of servers is that the connection between the LOM and internal ports of the integrated I/O module (switch or pass-through) is hardwired through the midplane. This design enables the link between the LOM and the integrated switch to be almost always in an up, or connected, state unless either a LOM or a switch port fails. The link can remain active even in the absence of a network connection between the external uplink ports on the integrated switch and the external network.

¹This term does not connote an actual operating speed of 1 Gbps. For high-speed transmission, connection to a Gigabit Ethernet server and network infrastructure is required.

Reprinted from Dell Power Solutions, February 2005. Copyright © 2005 Dell Inc. All rights reserved.
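To make this failure-domain difference concrete, the link state of a LOM can be modeled as a simple predicate over the components it depends on. The following Python functions are a hypothetical illustration only (they are not part of any Dell or Intel software): with an integrated switch module, the midplane link ignores everything outside the enclosure, whereas with a pass-through module the link behaves as it would on a stand-alone server.

```python
# Hypothetical model of blade LOM link state; illustration only,
# not part of any Dell or Intel software.

def switch_module_link_up(lom_ok: bool, internal_port_ok: bool) -> bool:
    """With an integrated switch module, the midplane link depends only on
    the LOM and the switch's internal port; external uplinks are irrelevant."""
    return lom_ok and internal_port_ok

def pass_through_link_up(lom_ok: bool, pt_port_ok: bool,
                         cable_ok: bool, external_port_ok: bool) -> bool:
    """With a pass-through module, the link behaves as on a stand-alone
    server: every element out to the external switch port must be healthy."""
    return lom_ok and pt_port_ok and cable_ok and external_port_ok

# The integrated-switch link stays up even with no external connectivity:
assert switch_module_link_up(lom_ok=True, internal_port_ok=True)
# The pass-through link goes down if the external cable fails:
assert not pass_through_link_up(True, True, cable_ok=False, external_port_ok=True)
```

The model captures why, as discussed next, teaming software behind an integrated switch cannot observe cable or external-port failures: those components simply do not appear in the link-state predicate.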
The connection between the LOM and internal ports of the integrated I/O module is critical in a network teaming scenario where the integrated switch is being used. This is because, to trigger a failover event, the teaming software looks only for the loss of the link between the LOM and the first switch. (Failures of cables or switch ports outside the enclosure do not trigger failover events.)

The scenario is different if a pass-through module is used. With a pass-through module, the link is in a connected state only if a network connection exists between the external ports on the pass-through module and the switch port outside the enclosure, as is true for a stand-alone server. Thus, for the pass-through module, the teaming software triggers a failover event if the LOM, pass-through port, cable, or external switch port fails.

NIC and LOM teaming benefits

Network interface card (NIC) teaming and LOM teaming can help ensure high availability and enhance network performance. Teaming combines two or more physical NICs or LOMs on a single server into a single logical device, or virtual adapter, to which one IP address can be assigned. The teaming approach requires teaming software, also known as an intermediate driver, to gather physical adapters into a team that behaves as a single virtual adapter. In the case of the Dell PowerEdge 1855 blade server, the intermediate driver is Intel Advanced Networking Services (iANS). Intermediate drivers serve as a wrapper around one or more base drivers, providing an interface between the base driver and the network protocol stack. In this way, the intermediate driver gains control over which packets are sent to which physical interface on the NICs or LOMs as well as other properties essential to teaming.

A virtual adapter can help provide fault tolerance (depending on the type of failure) and bandwidth aggregation (depending on the teaming mode selected).
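As a rough illustration of how an intermediate driver presents several physical adapters as one virtual adapter, consider the following Python sketch. It is hypothetical: iANS is a kernel-level driver, and every class and name here is invented for illustration. The sketch binds a single IP address to a team object and lets the team, not the caller, decide which physical interface carries each packet.

```python
# Hypothetical sketch of a team as a virtual adapter; all classes and names
# are invented for illustration and do not reflect the real iANS driver.

class PhysicalAdapter:
    def __init__(self, name: str):
        self.name = name
        self.link_up = True

    def send(self, packet: bytes) -> str:
        if not self.link_up:
            raise RuntimeError(f"{self.name}: link down")
        return f"{self.name} sent {len(packet)} bytes"

class VirtualAdapter:
    """One logical device: a single IP address bound to a team of adapters."""
    def __init__(self, ip_address: str, members: list):
        self.ip_address = ip_address
        self.members = members

    def send(self, packet: bytes) -> str:
        # The intermediate driver chooses which physical interface carries
        # each packet; here, simply the first member with an active link.
        for adapter in self.members:
            if adapter.link_up:
                return adapter.send(packet)
        raise RuntimeError(f"{self.ip_address}: all team members down")

team = VirtualAdapter("10.0.0.5", [PhysicalAdapter("LOM1"), PhysicalAdapter("LOM2")])
print(team.send(b"hello"))       # carried by LOM1
team.members[0].link_up = False  # simulate a LOM or internal-port failure
print(team.send(b"hello"))       # the IP stays reachable via LOM2
```

Because the IP address is a property of the virtual adapter rather than of LOM 1 or LOM 2, the simulated failure above changes only which physical port carries the traffic, never the address that clients use.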
If one of the physical NICs or LOMs, or the internal switch ports to which they are connected, fails, then the IP address remains accessible because it is bound to the logical device instead of to a single physical NIC or LOM.

Figure 1. Dell PowerEdge 1855 blade server architecture showing integrated switches and pass-through modules

To provide fault tolerance, when a team is created iANS designates one physical adapter in the team as the primary adapter and the remaining adapters as secondary. If the primary adapter fails, a secondary adapter assumes the primary adapter's duties. There are two types of primary adapter:

• Default primary adapter: If the administrator does not specify a preferred primary adapter in the team, iANS chooses an adapter of the highest capability (based on model and speed) to act as the default primary adapter. If a failover occurs, a secondary adapter becomes the new primary. Once the problem with the original primary is resolved, traffic does not automatically restore to the original default primary adapter. The restored adapter will, however, rejoin the team as a secondary adapter.

• Preferred primary adapter: The administrator can specify a preferred adapter using the Intel PROSet tool. Under normal conditions, the primary adapter handles all traffic. The secondary adapter receives fallback traffic if the primary fails.
If the preferred primary adapter fails but is later restored to an active status, iANS automatically switches control back to the preferred primary adapter.

Teaming features include failover protection, traffic balancing among team members, and bandwidth increases through aggregation. Note: Administrators can configure any team by using the Intel PROSet tool, which provides configuration capability for fault tolerance, load balancing, and link aggregation.

Fault tolerance. Fault tolerance provides NIC or LOM redundancy by designating a primary adapter within the virtual adapter and utilizing the remaining adapters as backups. This feature is designed to ensure server availability to the network. When a primary adapter loses its link, the intermediate driver fails traffic over to the secondary adapter. When the preferred primary adapter's link is restored, the intermediate driver fails traffic back to the primary adapter.
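The difference between default-primary and preferred-primary behavior described above can be summarized as a small state machine. The following Python sketch is hypothetical (the real logic lives inside iANS, and all names are invented); it shows that both modes fail over identically, but only the preferred-primary mode fails back automatically.

```python
# Hypothetical sketch of iANS-style failover/failback semantics;
# names are invented for illustration.

class Team:
    def __init__(self, adapters, preferred_primary=None):
        self.adapters = adapters                 # e.g. ["LOM1", "LOM2"]
        self.up = {a: True for a in adapters}
        self.preferred = preferred_primary       # None means default-primary mode
        # Without a preferred primary, iANS picks the most capable adapter;
        # this sketch simply takes the first one.
        self.primary = preferred_primary or adapters[0]

    def fail(self, adapter):
        self.up[adapter] = False
        if adapter == self.primary:
            # Fail over to a remaining secondary with an active link.
            self.primary = next(a for a in self.adapters if self.up[a])

    def restore(self, adapter):
        self.up[adapter] = True
        # Only a preferred primary reclaims traffic automatically; a
        # restored default primary rejoins the team as a secondary.
        if adapter == self.preferred:
            self.primary = adapter

# Default-primary mode: no automatic failback after the fault is resolved.
default_team = Team(["LOM1", "LOM2"])
default_team.fail("LOM1"); default_team.restore("LOM1")
print(default_team.primary)    # LOM2

# Preferred-primary mode: traffic returns once the link is restored.
preferred_team = Team(["LOM1", "LOM2"], preferred_primary="LOM1")
preferred_team.fail("LOM1"); preferred_team.restore("LOM1")
print(preferred_team.primary)  # LOM1
```

In practice this failback distinction matters for troubleshooting: after a repaired fault, a default-primary team will still be carrying traffic on what was originally a secondary adapter, which is expected behavior rather than a misconfiguration.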