SCALABLE ENTERPRISE

The Promise of Unified I/O Fabrics

Two trends are challenging the conventional practice of using multiple, specialized I/O fabrics in the data center: server form factors are shrinking, and enterprise applications are requiring more connections per server. However, the current practice of using multiple Ethernet and Fibre Channel connections on each server to support fully redundant, cluster-enabled computing environments inhibits scalability. Unified, high-performance I/O fabrics can enhance scalability by providing a single fault-tolerant connection. This approach allows legacy communication technologies to use one I/O “superfabric,” and the reduction in physical connections can help achieve better performance, greater flexibility, and lower total cost of ownership—the primary benefits of the scalable enterprise.

BY J. CRAIG LOWERY, PH.D., AND DAVID SCHMIDT

The modern data center is a collection of server and storage components partitioned into various cooperating groups that communicate over specialized networks. In most cases, the technologies behind these networks were conceived decades ago to address particular kinds of traffic, such as user access, file transfer, and high-speed peripheral connections. Over time, data centers incrementally evolved to meet the increasing requirements of their environments, often retaining vestigial characteristics of repeatedly revamped technologies for backward compatibility and interoperability. Although the standardization and stability enabled by backward compatibility and interoperability have paved the way for the proliferation of computer systems, it is becoming increasingly difficult to extend these legacy technologies to meet the fundamentally different requirements imposed by the scalable enterprise.

For example, Ethernet—the de facto standard for local area network (LAN) communication—began as a rather cumbersome bus architecture with performance limitations imposed by the shared nature of its medium access control protocol.
Today, Ethernet has become a much faster switched communication standard, evolving from 10 Mbps to 100 Mbps to 1 Gbps. Yet, the remnants of its past—the bus-based architecture and, in particular, the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol—introduce unnecessary overhead for the sake of compatibility, making Ethernet less attractive for protocols such as Remote Direct Memory Access (RDMA) than newer interconnects without the same historical baggage, such as InfiniBand.

Reprinted from Dell Power Solutions, February 2005. Copyright © 2005 Dell Inc. All rights reserved.

Other interconnect technologies, such as SCSI, Peripheral Component Interconnect (PCI), and Fibre Channel,
have followed a similar trajectory. In each case, the technology was created to solve a particular problem and, over time, has been extended to increase both performance and scope of application.

An unfortunate side effect of the proliferation of multiple interconnect technologies is the requirement that they coexist in smaller and smaller spaces. High-density servers—such as rack-dense 1U servers and blade servers—are required to provide user experiences equivalent to those of larger systems such as the traditional tower. At the same time, emerging enterprise applications for these high-density systems require an increasing number of connections, or large fan-outs. Some clustering systems, such as high-availability clusters and clustered databases, require multiple LAN connections and two Fibre Channel connections for a fault-tolerant storage area network (SAN). Fitting four or more of these connections into a blade server’s form factor can be challenging.

Another drawback of legacy interconnects is that they do not inherently encompass the fabric concept. A fabric functions much like a public utility: it is a multipurpose interconnect that is accessible from virtually anywhere. The vision of the scalable enterprise depends largely on fabric semantics—the model of communication that determines how enterprise applications “speak” within the data center that employs the fabric—because next-generation data centers will likely be built using standard, disposable components that plug in to the infrastructure as capacity is needed. Fabrics are the key to this plug-and-play data center.
Although some technologies, such as Ethernet, come closer than others, such as SCSI, to delivering a fabric-like usage semantic, they still fall short in key areas, primarily by requiring additional unnecessary overhead to support their legacy aspects. For example, using TCP/IP over Ethernet to perform RDMA significantly wastes bandwidth and is unnecessarily slow for a high-performance computing cluster rack interconnect, because TCP’s sliding window protocol was designed for the unreliable Internet—not a single, well-controlled rack with a high-speed communication link.

Heterogeneous legacy interconnects are also hindered by the support structure required to maintain data centers that incorporate them. Today, IT support teams must staff skills in each interconnect technology. This redundancy is inefficient when compared to the single-culture support required to maintain a unified fabric. A unified fabric subsumes all communication functions through one fabric connection or—for redundancy—two fabric connections. The fan-out problem can be resolved at the software level by multiplexing multiple virtual interfaces over the single physical interface of a unified fabric. Some of these virtual interfaces may be designed to appear to higher layers of software as legacy technology interfaces to help provide transparency and backward compatibility.

As the deficiencies of heterogeneous interconnects in the scalable enterprise intensify and the need for fabric semantics mounts, a clear gap arises that cannot be adequately filled by additional iterations to refine older technologies or make them suitable and relevant going forward.
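The software-level multiplexing described above can be sketched in a few lines of Python. This is a minimal illustration of the idea—tagging each frame with a virtual-interface ID so that logically separate traffic streams share one physical connection—and the frame format, tag width, and class names here are hypothetical, not part of any fabric standard.

```python
import struct

class UnifiedFabricLink:
    """Hypothetical sketch: several virtual interfaces share one physical link.

    Each outbound frame is prefixed with a 2-byte virtual-interface ID so the
    receiving end can demultiplex traffic back to the correct consumer
    (for example, a LAN-like interface and a SAN-like interface).
    """

    def __init__(self):
        self.inboxes = {}  # virtual-interface ID -> list of received payloads

    def register(self, vif_id):
        self.inboxes[vif_id] = []

    def encapsulate(self, vif_id, payload):
        # Tag the payload with its virtual-interface ID before it crosses
        # the single physical connection.
        return struct.pack("!H", vif_id) + payload

    def deliver(self, frame):
        # Demultiplex: strip the tag and route to the right virtual interface.
        (vif_id,) = struct.unpack("!H", frame[:2])
        self.inboxes[vif_id].append(frame[2:])

link = UnifiedFabricLink()
LAN_VIF, SAN_VIF = 1, 2
link.register(LAN_VIF)
link.register(SAN_VIF)

# Two logically separate traffic streams traverse the same physical link.
for frame in (link.encapsulate(LAN_VIF, b"ip-packet"),
              link.encapsulate(SAN_VIF, b"scsi-command")):
    link.deliver(frame)

print(link.inboxes[LAN_VIF])  # only the LAN traffic
print(link.inboxes[SAN_VIF])  # only the SAN traffic
```

The same tag-and-demultiplex pattern underlies real channel-separation mechanisms such as IEEE 802.1Q VLAN tagging and InfiniBand virtual lanes; a production fabric would of course add flow control, prioritization, and fault handling per virtual interface.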
It is this need that the unified I/O fabric is designed to address.

Understanding the requirements of unified I/O fabrics

Any candidate unified I/O fabric technology must meet the following suitability requirements:

• Transparent coexistence: The fabric must be able to coexist and interoperate with legacy interconnect technologies without placing additional requirements on end users and applications.

• High performance: The fabric must be able to accommodate the aggregate I/O that would otherwise have been distributed across legacy interconnects. Nonblocking connectivity, throughput, and latency should be optimized so that the performance of the unified fabric is the same as or better than the performance of multiple legacy networks.

• Fault tolerance: The fabric must respond gracefully to component failures at both the legacy interconnect layer and its own unified layer. Furthermore, to meet the requirement of transparent coexistence, the fabric should support legacy fault-tolerance technologies.

• Standardization: The fabric must conform to industry standards to ensure competitive pricing, multiple sources for components, longevity of the technology, and the creation of an attendant ecosystem. Ecosystem refers to all the companies, services, and ancillary products that must come into existence to make the technology viable and deployable on a large scale.

• Value pricing: The fabric must be less expensive to procure and maintain than an equivalent combination of legacy interconnects.

Unified I/O interconnects are not a new idea—proprietary solutions have been developed and deployed with some success in targeted, custom environments. However, most efforts to date have not met all of the preceding requirements, usually failing on transparency, standardization, and pricing. Recently,