R&M <strong>Data</strong> <strong>Center</strong><br />
<strong>Handbook</strong><br />
www.datacenter.rdm.com<br />
Preface<br />
The modern data center, along with the IT infrastructure, is the nerve center of an enterprise. Of key importance is<br />
not the data center's size but its capability to provide high availability and peak performance, 7 days a week, 365<br />
days a year.<br />
High-quality products alone are not enough to ensure continuous, reliable operation in the data center, nor is it<br />
enough just to replace existing components with high-performance products. Of much greater importance is an<br />
integrated and forward-looking planning process that is based on detailed analyses to determine specific<br />
requirements and provide efficient solutions. Planning is the relevant factor in the construction of new systems as<br />
well as for expansions or relocations. In all cases, the data cabling system is the bedrock of all communication<br />
channels and therefore merits special attention.<br />
<strong>Data</strong> centers can have different basic structures, some of which are not standard-compliant, so it is<br />
essential to apply the parameters relevant to the specific case in order to create an efficient and economically viable nerve center.<br />
Today more than ever, an efficient and reliable data center is a distinct competitive advantage. In addition to<br />
company-specific requirements, legal security standards and their impact need to be taken into account; these<br />
include regulations such as the Sarbanes-Oxley Act (SOX), which provide the basic legal premises for data center planning.<br />
Furthermore, various standardization bodies are addressing cabling standards in data centers.<br />
This handbook is designed to provide you with important knowledge and to point out relationships between the<br />
multitude of challenges in the data center.<br />
© 08/2011 Reichle & De-Massari AG (R&M) Version 2.0<br />
R&M <strong>Data</strong> <strong>Center</strong> <strong>Handbook</strong> V2.0 © 08/2011 Reichle & De-Massari AG Page 1 of 156
Table of Contents<br />
1. Introduction to <strong>Data</strong> <strong>Center</strong>s……………………………………………………………………….. 5<br />
1.1. Technical Terms…………………………………………………………………………………………………... 5<br />
1.2. General Information………………………………………………………………………………………………. 5<br />
1.3. Standards…………………………………………………………………………………………………………... 8<br />
1.4. Availability…………………………………………………………………………………………………………. 8<br />
1.5. Market Segmentation……………………………………………………………………………………………. 9<br />
1.6. Cabling Systems………………………………………………………………………………………………… 10<br />
1.7. Intelligent Cable Management Solutions…………………………………………………………………… 12<br />
1.8. Network Hardware and Virtualization……………………………………………………………………….. 12<br />
1.9. <strong>Data</strong> <strong>Center</strong> Energy Consumption…………………………………………………………………………… 13<br />
1.10. Energy Efficiency – Initiatives and Organizations………………………………………………………… 14<br />
1.11. Green IT…………………………………………………………………………………………………………… 15<br />
1.12. Security Aspects………………………………………………………………………………………………… 16<br />
2. Planning and Designing <strong>Data</strong> <strong>Center</strong>s………………………………………………………….. 20<br />
2.1. <strong>Data</strong> <strong>Center</strong> Types………………………………………………………………………………………………. 20<br />
2.1.1. Business Models and Services………………………………………………………………………………….. 20<br />
2.1.2. Typology Overview……………………………………………………………………………………………….. 21<br />
2.1.3. Number and Size of <strong>Data</strong> <strong>Center</strong>s……………………………………………………………………………… 22<br />
2.2. Classes (Downtime and Redundancy)………………………………………………………………………. 23<br />
2.2.1. Tiers I – IV…………………………………………………………………………………………………………. 23<br />
2.2.2. Classification Impact on Communications Cabling……………………………………………………………. 24<br />
2.3. Governance, Risk Management and Compliance…………………………………………………………. 26<br />
2.3.1. IT Governance…………………………………………………………………………………………………….. 27<br />
2.3.2. IT Risk Management……………………………………………………………………………………………… 28<br />
2.3.3. IT Compliance…………………………………………………………………………………………………….. 28<br />
2.3.4. Standards and Regulations……………………………………………………………………………………… 28<br />
2.3.5. Certifications and Audits…………………………………………………………………………………………. 33<br />
2.3.6. Potential Risks…………………………………………………………………………………………………….. 34<br />
2.4. Customer Perspective………………………………………………………………………………………….. 36<br />
2.4.1. <strong>Data</strong> <strong>Center</strong> Operators / Decision-Makers…………………………………………………………………….. 36<br />
2.4.2. Motivation of the Customer……………………………………………………………………………………… 39<br />
2.4.3. Expectations of Customers……………………………………………………………………………………… 40<br />
2.4.4. In-House or Outsourced <strong>Data</strong> <strong>Center</strong>………………………………………………………………………….. 43<br />
2.5. Aspects of the Planning of a <strong>Data</strong> <strong>Center</strong>…………………………………………………………………... 44<br />
2.5.1. External Planning Support……………………………………………………………………………………….. 44<br />
2.5.2. Further Considerations for Planning……………………………………………………………………………. 45<br />
3. <strong>Data</strong> <strong>Center</strong> Overview………………………………………………………………………………. 46<br />
3.1. Standards for <strong>Data</strong> <strong>Center</strong>s……………………………………………………………………………………. 46<br />
3.1.1. Overview of Relevant Standards……………………………………………………………………………….. 46<br />
3.1.2. ISO/IEC 24764……………………………………………………………………………………………………. 47<br />
3.1.3. EN 50173-5……………………………………………………………………………………………………….. 49<br />
3.1.4. TIA-942……………………………………………………………………………………………………………. 50<br />
3.2. <strong>Data</strong> <strong>Center</strong> Layout……………………………………………………………………………………………… 51<br />
3.2.1. Standard Requirements…………………………………………………………………………………………. 51<br />
3.2.2. Room Concepts…………………………………………………………………………………………………... 52<br />
3.2.3. Security Zones……………………………………………………………………………………………………. 53<br />
3.3. Network Hierarchy………………………………………………………………………………………………. 54<br />
3.3.1. Three Tier Network………………………………………………………………………………………………. 54<br />
3.3.2. Access Layer……………………………………………………………………………………………………… 55<br />
3.3.3. Aggregation / Distribution Layer………………………………………………………………………………… 55<br />
3.3.4. Core Layer / Backbone………………………………………………………………………………………….. 56<br />
3.3.5. Advantages of Hierarchical Networks………………………………………………………………………….. 57<br />
3.4. Cabling Architecture in the <strong>Data</strong> <strong>Center</strong>……………………………………………………………………. 57<br />
3.4.1. Top of Rack (ToR)………………………………………………………………………………………………... 58<br />
3.4.2. End of Row (EoR) / Dual End of Row………………………………………………………………………….. 60<br />
3.4.3. Middle of Row (MoR)…………………………………………………………………………………………….. 61<br />
3.4.4. Two Row Switching………………………………………………………………………………………………. 62<br />
3.4.5. Other Variants…………………………………………………………………………………………………….. 62<br />
3.5. <strong>Data</strong> <strong>Center</strong> Infrastructure…………………………………………………………………………………….. 66<br />
3.5.1. Power Supply, Shielding and Grounding………………………………………………………………………. 66<br />
3.5.2. Cooling, Hot and Cold Aisles……………………………………………………………………………………. 69<br />
3.5.3. Hollow / Double Floors and Hollow Ceilings…………………………………………………………………… 72<br />
3.5.4. Cable Runs and Routing…………………………………………………………………………………………. 73<br />
3.5.5. Basic Protection and Security…………………………………………………………………………………… 75<br />
3.6. Active Components / Network Hardware…………………………………………………………………… 78<br />
3.6.1. Introduction to Active Components……………………………………………………………………………... 79<br />
3.6.2. IT Infrastructure Basics (Server and Storage Systems)……………………………………………………… 79<br />
3.6.3. Network Infrastructure Basics (NICs, Switches, Routers & Firewalls)……………………………………… 84<br />
3.6.4. Connection Technologies / Interfaces (RJ45, SC, LC, MPO, GBICs/SFPs)………………………………. 87<br />
3.6.5. Energy Requirements of Copper and Fiber Optic Interfaces………………………………………………... 89<br />
3.6.6. Trends……………………………………………………………………………………………………………… 90<br />
3.7. Virtualization……………………………………………………………………………………………………... 90<br />
3.7.1. Implementing Server / Storage / Client Virtualization………………………………………………………… 91<br />
3.7.2. Converged Networks, Effect on Cabling……………………………………………………………………….. 93<br />
3.7.3. Trends……………………………………………………………………………………………………………… 94<br />
3.8. Transmission Protocols……………………………………………………………………………………….. 95<br />
3.8.1. Implementation (OSI & TCP/IP, Protocols)……………………………………………………………………. 95<br />
3.8.2. Ethernet IEEE 802.3……………………………………………………………………………………………… 98<br />
3.8.3. Fibre Channel (FC)……………………………………………………………………………………………… 102<br />
3.8.4. Fibre Channel over Ethernet (FCoE)………………………………………………………………………….. 104<br />
3.8.5. iSCSI & InfiniBand………………………………………………………………………………………………. 105<br />
3.8.6. Protocols for Redundant Paths………………………………………………………………………………… 106<br />
3.8.7. <strong>Data</strong> <strong>Center</strong> Bridging……………………………………………………………………………………………. 108<br />
3.9. Transmission Media…………………………………………………………………………………………… 109<br />
3.9.1. Coax and Twinax Cables ………………………………………………………………………………………. 110<br />
3.9.2. Twisted Copper Cables (Twisted Pair)……………………………………………………………………….. 110<br />
3.9.3. Plug Connectors for Twisted Copper Cables………………………………………………………………… 113<br />
3.9.4. Glass Fiber Cables (Fiber optic)………………………………………………………………………………. 116<br />
3.9.5. Multimode, OM3/4 ……………………………………………………………………………………………… 116<br />
3.9.6. Single Mode, OS1/2……………………………………………………………………………………… 117<br />
3.9.7. Plug Connectors for Glass Fiber Cables……………………………………………………………………... 118<br />
3.10. Implementations and Analysis……………………………………………………………………………… 124<br />
3.10.1. Connection Technology for 40/100 Gigabit Ethernet (MPO/MTP®)……………………………….. 126<br />
3.10.2. Migration Path to 40/100 Gigabit Ethernet…………………………………………………………… 129<br />
3.10.3. Power over Ethernet (PoE/PoEplus)…………………………………………………………………………. 135<br />
3.10.4. Short Links ……………………………………………………………………………………………………… 139<br />
3.10.5. Transmission Capacities of Class EA and FA Cabling Systems…………………………………… 143<br />
3.10.6. EMC Behavior in Shielded and Unshielded Cabling Systems…………………………………………….. 148<br />
3.10.7. Consolidating the <strong>Data</strong> <strong>Center</strong> and Floor Distributors……………………………………………………… 155<br />
4. Appendix……………………………………………………………………………………………..156<br />
References………………………………………………………………………………………………………. 156<br />
Standards………………………………………………………………………………………………………... 156<br />
All information provided in this handbook has been presented and verified to the best of our knowledge and in good faith.<br />
Neither the authors nor the publisher shall have any liability with respect to any damage caused directly or indirectly by the<br />
information contained in this book.<br />
All rights reserved. Dissemination and reproduction of this publication, texts or images, or extracts thereof, for any purpose and<br />
in any form whatsoever without the express written approval of the publisher is in breach of copyright and liable to prosecution.<br />
This includes copying, translating, use for teaching purposes or in electronic media.<br />
This book contains registered trademarks, registered trade names, and product names, and even if not specifically indicated as<br />
such, the corresponding regulations on protection apply.<br />
1. Introduction to <strong>Data</strong> <strong>Center</strong>s<br />
According to Gartner, cloud computing, social media and context-aware<br />
computing will be shaping the economy and IT in the coming years. Cloud<br />
computing is already changing the IT industry today, and there will be more<br />
and more services and products entering the market. Outsourcing projects and<br />
subtasks, and procuring IT resources and services via the cloud, is becoming<br />
standard practice. The attraction of cloud-based services is also the result<br />
of their considerable cost and efficiency<br />
advantages as compared to traditional IT infrastructures. Server and storage<br />
virtualization help to optimize and concentrate IT infrastructures and save on<br />
resources.<br />
Consequently, high demands are placed on data center performance and availability, and servers are required to<br />
operate around the clock. A breakdown in a company's computer room can prove fatal; at the very least it<br />
results in enormous costs and dissatisfied users and customers.<br />
1.1. Technical Terms<br />
There are no standard, consistently used terms in the data center world, and vendors' creativity in coining them is endless.<br />
We recommend always clarifying what the supplier means by a term and what the customer expects.<br />
The terms<br />
data center – datacenter – data processing center – computer room – server room – server cabinet –<br />
colocation center – IT room – and more<br />
stand for the physical structure that is designed to house and operate servers. Interpretations may vary. The term<br />
campus (or MAN) is also used for data centers spread over a larger area.<br />
We define "data center" as follows:<br />
A data center is a building or premise that houses the central data processing equipment (i.e. servers and<br />
infrastructure required for operation) of one or several companies or organizations. The data center must<br />
consist of at least one separate room featuring independent power supply and climate control.<br />
Consequently, there is a distinction between a data center and specific server cabinets or specific servers.<br />
1.2. General Information<br />
The data center is the core of a company; it creates the technical and infrastructural conditions required for<br />
virtually all business processes in a company. It secures the installed servers and components in use, protects<br />
them from external dangers and provides the infrastructure required for continued reliable operation. The physical<br />
structure provides protection against unauthorized access, fire and natural disasters. The necessary power supply<br />
and climate control ensure reliability.<br />
Accurate, economical planning and the sustainable operation of a modern data center are necessary to meet<br />
availability requirements. The enormous infrastructural requirements involved lead many<br />
companies to opt for an external supplier and outsource their data center needs.<br />
Outsourcing companies (such as IBM, HP, CSC, etc.) offer their customers the use of data center infrastructures<br />
in which they manage the hardware as well as the system software. This offers the advantage of improved control<br />
of IT costs, combined with the highest security standards and the latest technology. In outsourcing, the systems<br />
and applications of a company are first migrated to the new data center under a migration project, and are then<br />
run in the data center of the outsourcing company.<br />
Along with data security, high availability and operational stability are top priorities.<br />
The data center must have the ability to run all the necessary applications, required server types and server<br />
classes. Due to the increasing number of real-time applications, the number of server variants is soaring as well.<br />
For the general performance of a server service it does not matter whether the service, i.e. the software that is offered,<br />
• runs alongside other services on the same server;<br />
• is processed by a dedicated server computer; or<br />
• is distributed across several servers because the load would otherwise be too great.<br />
This last situation is especially common with public WWW servers, because heavily frequented<br />
Internet sites such as search engines, large web shops or auction houses generate large<br />
quantities of data for processing. In this case, so-called load-balancing systems come into<br />
play. They are programmed to distribute incoming queries to several physical servers.<br />
The following basic types of server services exist:<br />
File Server<br />
A file server is a computer attached to a network that provides shared access to computer files for workstations<br />
that are connected to the computer network. This means that users can exchange and share files via<br />
a central server. The special feature offered by file servers is that using them is as transparent as using<br />
a local file system on workstation computers. Management of access rights is important, for not all users should<br />
have access to all the files.<br />
Print Server<br />
A print server or printer server gives several users or client computers shared access to one or more printers. The<br />
biggest challenge is to automatically provide each client computer with the appropriate printer driver for its<br />
operating system so that it can use the printer without having to install the driver software locally.<br />
Mail Server<br />
A mail server, or message transfer agent (MTA), does not necessarily have to be installed on an Internet<br />
provider's server, but can operate on the local network of a company. First, it is often an advantage when<br />
employees can communicate with each other by e-mail, and second, it is sometimes necessary to operate an<br />
internal e-mail server just because Internet access is heavily restricted for security reasons, or communication<br />
between workstations and external mail servers is not allowed in the first place.<br />
Web Server<br />
The primary function of a web server (HTTP server) is to deliver web pages from a network to clients on request.<br />
Usually, this network is the public Internet (World Wide Web). This form of information transfer is becoming more<br />
and more popular in local company networks (intranets). The client uses a browser to request, display and explore<br />
web pages, and to follow, by mouse click, the hyperlinks contained within those pages, which link to other<br />
websites, files, etc. on the same or a different server.<br />
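The load-balancing systems mentioned at the beginning of this chapter typically sit in front of exactly such web servers. A minimal sketch of the simplest scheme, round-robin dispatching (the backend names are invented for illustration):<br />

```python
from itertools import cycle

# Hypothetical pool of physical web servers behind one public address.
backends = ["web-01", "web-02", "web-03"]

def make_dispatcher(pool):
    """Return a function that assigns each incoming query to the next backend in turn."""
    ring = cycle(pool)  # endless round-robin iterator over the pool
    def dispatch(request_id):
        return (request_id, next(ring))
    return dispatch

dispatch = make_dispatcher(backends)
assignments = [dispatch(i) for i in range(5)]
# Five queries are spread evenly: web-01, web-02, web-03, web-01, web-02
```

Real load balancers additionally track server health and load, but the distribution principle is the same.<br />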
Directory Services (DNS server)<br />
Directory services are becoming increasingly important in IT. A directory in this context is not a file system but an<br />
information structure, a standardized catalog of users, computers, peripherals and rights in a network. Information<br />
can be accessed network-wide via an entry in this directory, which means that directory services are a practical<br />
basis for numerous services, making work easier for administrators and life easier for users in large network<br />
environments. Here are some examples:<br />
• Automatic software distribution and installation<br />
• Roaming user profiles<br />
• Single sign-on services<br />
• Rights management based on computer, clients and properties<br />
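The idea of a network-wide catalog of users, devices and rights can be illustrated with a toy in-memory directory (the entry names and attributes below are invented for illustration; real installations use directory protocols such as LDAP):<br />

```python
# Toy directory: a catalog mapping entry names to attributes and rights.
directory = {
    "cn=alice,ou=users": {"type": "user", "groups": ["staff", "printing"]},
    "cn=prn-01,ou=devices": {"type": "printer", "allowed": ["printing"]},
}

def may_use(user_dn, device_dn):
    """Grant access if the user shares a group with the device's allowed list."""
    user = directory[user_dn]
    device = directory[device_dn]
    return any(group in device["allowed"] for group in user["groups"])

print(may_use("cn=alice,ou=users", "cn=prn-01,ou=devices"))  # True
```

Services such as single sign-on or rights management then become simple lookups against this one catalog instead of per-machine configuration.<br />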
Application Server<br />
An application server allows users on a network to share software applications on the server.<br />
• This can be a server in the LAN, which runs applications that are frequently used by clients (instead of<br />
pure file server services).<br />
• Or it runs the application/business logic of a program (e.g. SAP R/3®) in a three-tier architecture (see<br />
section 3.3.1.), interacts with database servers and manages multiple client access.<br />
• Or it is a web application server (WAS) for web applications, dynamically generating HTML pages (or<br />
providing web services) for an Internet or intranet connection.<br />
Web services are automatic communication services over the Internet (or intranet) which can be found, interpreted<br />
and used through established and standardized procedures.<br />
Simple Application Services<br />
In its simplest form, an application server merely stores an application so that it can be loaded into the client's<br />
random access memory (RAM) and launched there. This process differs only slightly from file serving. The<br />
application just needs to know that any necessary additional components or configuration data are not located on<br />
the computer it launches from, but on the server from which it was loaded.<br />
This configuration makes sense when a high number of single-user application programs exist. It offers two<br />
advantages: first, it requires just one program installation on the server instead of separate installations on each<br />
workstation, and second, costs are lower – usually only one software license is required, for the computer<br />
that runs the program. Thus, when an application is used by several computers, but not at the same time,<br />
the software can be installed on the server, used by all computers, and the licensing fee paid only once.<br />
Updating the software is easier too.<br />
Client–Server Applications<br />
In the more complex forms of application servers, parts of a program – in some cases the entire program – are<br />
launched directly on the server. In the case of large databases, for example, the actual data and the basic data<br />
management software are usually stored on the server. The components on client computers are called front ends,<br />
i.e. software components that provide an interface to the actual database.<br />
Network<br />
High-performance, fail-safe networks are required in order to enable client-to-server as well as server-to-server<br />
communication. Investing in server technology is of little benefit if the quality of the network is not up to the task.<br />
Network components and transmission protocols are discussed in sections 3.6. and 3.8.<br />
Network Structures and Availability<br />
One way to increase the availability of a network is to design it for redundancy, meaning that several connections are provided to connect servers, and possibly storage systems, as well as network components to the network. This ensures that, in case of a system, interface or connection failure, data traffic can still be maintained. This is of critical importance especially for the backbone, which is literally the backbone of the infrastructure.<br />
This is why data center classifications were defined to specify different levels of network reliability. These are discussed in greater detail in section 2.2.<br />
Access for Remote Users<br />
In addition to the local network, many enterprises have remote users that must be connected. There are two<br />
categories of remote users: those working at a remote regional company office and those with mobile access or<br />
working from home. Two company sites are usually connected by means of a virtual private network (VPN), also<br />
called a site-to-site scenario.<br />
A VPN connection can also be established between individual users and the company's head office, which would<br />
constitute a site-to-end scenario. An SSL-encrypted connection is sufficient for providing access for one single<br />
user if this is necessary.<br />
VPN Connections<br />
In VPN connections, an automatic VPN tunnel is established and network data is transported between the<br />
company network and the client using encryption. In general, VPN connections support any kind of network traffic,<br />
i.e. terminal service sessions, e-mails, print data, file transfers and other data exchange. To provide the user with<br />
fast access to the network resources, the transmission of the respective data packets must be prioritized in<br />
comparison with the rest of the network traffic. These requirements can be met with high-end routers. With<br />
regard to the operational reliability of Wide Area Network (WAN) links, these connections should also be laid out<br />
redundantly.<br />
Storage Architectures<br />
A storage-consolidated environment offers the following advantages:<br />
• Better manageability (particularly for capacity extension etc.)<br />
• Higher availability (requiring additional measures)<br />
• Extended functions such as snapshotting, cloning, mirroring, etc.<br />
SAN, NAS, iSCSI<br />
Different technologies exist for accessing central storage resources:<br />
A storage server conventionally holds a sufficient number of local hard disk drives (DAS: Direct Attached Storage). Even with high numbers of hard disks there are rarely any technical problems, since modern servers can easily access several dozen hard disks thanks to RAID controllers (Redundant Array of Independent Disks).<br />
There is a trend emerging in data centers towards storage consolidation. This means that a storage network (SAN: Storage Area Network) is installed which includes one or several storage systems for the servers to store data.<br />
• SAN: Even though the term SAN, Storage Area Network, is basically used for all kinds of storage<br />
networks independent of their technology, it is generally used today as a synonym for Fibre Channel<br />
SAN. Fibre Channel is currently the most widely used technology for storage networking. It is discussed<br />
in greater detail in section 3.8.3.<br />
• NAS: Network Attached Storage, as the name suggests, is a storage system that can be connected<br />
directly to an existing LAN. NAS systems are primarily file servers. A point to mention is that there is no<br />
block-level access (exclusively assigned storage area) to a file share, so in general database files cannot<br />
be stored.<br />
• iSCSI: iSCSI, Internet Small Computer System Interface, allows block-level access to a<br />
storage resource on the network as if it were locally connected through SCSI. Professional NAS systems<br />
support iSCSI access (e.g. systems from Network Appliance), so that the storage areas of these systems<br />
can also be used by database servers. On the server side, iSCSI can be used with conventional network<br />
cards. It is a cost-effective technology that is spreading fast.<br />
However, the dominant technology in data center environments today is Fibre Channel, mainly because high-end<br />
storage systems have only been available with Fibre Channel connectivity so far.<br />
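The practical difference between file-level access (NAS) and block-level access (SAN, iSCSI) can be sketched with a toy model in which a "device" is just a byte array divided into fixed-size blocks (block size and contents are invented for illustration):<br />

```python
BLOCK_SIZE = 512  # bytes per block, as on a classic disk

# Block-level view (SAN/iSCSI): raw numbered blocks, no notion of files.
device = bytearray(8 * BLOCK_SIZE)

def write_block(dev, lba, data):
    """Write one block at logical block address `lba`; the caller chooses placement."""
    start = lba * BLOCK_SIZE
    dev[start:start + len(data)] = data

def read_block(dev, lba):
    """Read the full block at logical block address `lba`."""
    start = lba * BLOCK_SIZE
    return bytes(dev[start:start + BLOCK_SIZE])

# File-level view (NAS): the file server hides the blocks behind a name.
files = {}
files["report.txt"] = b"payload"          # caller only names the file
write_block(device, 3, b"payload")        # caller addresses the block itself

print(read_block(device, 3)[:7])  # b'payload'
print(files["report.txt"])        # b'payload'
```

A database engine needs the block-level view in order to lay out its own pages, which is why plain NAS file shares are generally unsuitable for database files, as noted above.<br />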
1.3. Standards<br />
The U.S. TIA-942 standard, the international ISO/IEC 24764 standard and the European EN 50173-5 standard<br />
define the basic infrastructure in data centers. They cover cabling systems, 19" technologies, power supply,<br />
climate control, grounding, etc., and also classify data centers into availability tiers.<br />
A detailed comparison of the different standards is provided in section 3.1.<br />
1.4. Availability<br />
Just a few years ago, there were still many enterprises which could have survived a failure of several hours of<br />
their IT infrastructure. Today, the number of companies who absolutely rely on continuous availability of their IT is<br />
very high and growing.<br />
A study carried out by the Meta Group showed that the damage caused by the breakdown of a company's key IT<br />
systems for more than 10 days can put the company out of business in the following three to five years – with a<br />
likelihood of 50 percent.<br />
There are a number of classifications with regard to the reliability and availability of data centers, for example those issued by the Uptime Institute, BSI and BITKOM (overview in 2.1.2).<br />
The availability rating of a company's communications infrastructure is of crucial importance in developing, upgrading and analyzing an IT concept in today's world. Consequently, the basic question is: "What is the maximum acceptable IT outage time for the company?"<br />
These growing requirements for availability not only affect the IT infrastructure itself but also place high demands on the continuous securing of environmental conditions and supply. Redundancy in cooling and power supplies, multiple paths and interruption-free system maintenance have become standard for highly available IT infrastructures.<br />
The tier levels, when they were introduced, and their requirements:<br />
• Tier I (in the 1960s): Single path for power and cooling distribution, no redundant components; 99.671% availability<br />
• Tier II (in the 1970s): Single path for power and cooling distribution, includes redundant components; 99.749% availability<br />
• Tier III (end of the 1980s): Multiple power and cooling distribution paths but with only one active path, concurrently maintainable; 99.982% availability<br />
• Tier IV (1994): Multiple active power and cooling distribution paths, includes redundant components, fault-tolerant; 99.995% availability<br />
Source: US Uptime Institute: Industry Standards Tier Classification<br />
Before planning and coordinating technical components so as to achieve the targeted availability, there are<br />
additional points regarding risk assessment and site selection that must be considered.<br />
These points include all possible site risks, be they of a geographical (air traffic, floods, etc.) or political (wars,<br />
conflicts, terrorist attacks, etc.) nature or determined by the surroundings (fire-hazards due to gas stations,<br />
chemical storages, etc.), which could have an effect on the likelihood of a potential failure. Furthermore, potentially<br />
criminal behavior of company employees and from outside the company should also be taken into consideration.<br />
Achieving high availability on the one hand presupposes an analysis of technical options and solutions, but it also<br />
requires the operator to plan and implement a comprehensive organizational structure, including the employment<br />
of trained service staff and the provision of spare parts and maintenance contracts. Precise instructions as to how<br />
to behave in case of failure or emergency are also part of this list.<br />
The term "availability" denotes the probability that a system is functioning as planned at a given point in time.<br />
1.5. Market Segmentation<br />
We can often read about Google’s gigantic data centers. However, Google is no longer the only company running<br />
large server farms. Many companies like to keep their number of servers a secret, giving rise to speculation. Most<br />
of these are smaller companies but there are also some extremely big ones among them.<br />
The following companies can be assumed to be running more than 50,000 servers:<br />
• Google: There has been much speculation from the very beginning about the number of servers Google<br />
runs. According to estimates, the number presently amounts to over 500,000.<br />
• Microsoft: The last time Microsoft disclosed their number of servers in operation was in the second<br />
quarter of 2008, and that number was 218,000 servers. In the meantime, it can be assumed that as a<br />
result of the new data center in Chicago, that number has grown to over 300,000. Provided the Chicago<br />
data center stays in operation, the number of Microsoft servers will continue to increase rapidly.<br />
• Amazon: The company owns one of the biggest online shops in the world. Amazon reveals only little<br />
about their data center operations. However, one known fact is that they spent more than 86 million<br />
dollars on servers from Rackable.<br />
• eBay: 160 million people are active on the platform of this large online auction house, plus another 443<br />
million Skype users. This requires a gigantic data center. In total, more than 8.5 petabytes of eBay data<br />
are stored. It is difficult to estimate the number of servers, but it is safe to say that eBay is a member of<br />
the 50,000-server club.<br />
R&M <strong>Data</strong> <strong>Center</strong> <strong>Handbook</strong> V2.0 © 08/2011 Reichle & De-Massari AG Page 9 of 156
www.datacenter.rdm.com<br />
• Yahoo!: The Yahoo! data center is significantly smaller than that of Google or Microsoft. Yet, it can still<br />
be assumed that Yahoo! runs more than 50,000 servers to support their free hosting operation, the paid<br />
hosting services and the Yahoo! store.<br />
• HP/EDS: This company operates a huge data center. Company documentation reveals that EDS runs<br />
more than 380,000 servers in 180 data centers.<br />
• IBM: The area covered by IBM's data centers measures 8 million square meters. This tells us that a<br />
very high number of servers is at work for the company and their customers.<br />
• Facebook: The only information available is that Facebook has one data center with more than 10,000<br />
servers. Since Facebook has over 200 million users storing over 40 billion pictures, it can safely be<br />
assumed that the number of servers is higher than 50,000. Facebook plans to use a data center<br />
architecture similar to Google's.<br />
Google, eBay and Microsoft are known users of container data centers, which were a trend topic in 2010,<br />
according to Gartner. Other terms used to describe container-type data centers are:<br />
modular container-sized data center – scalable modular data center (SMDC) – modular data center<br />
(MDC) – cube – black box – portable modular data center (PMDC) – ultra-modular data center – data<br />
center in-a-box – plug-and-play data center – performance-optimized data center – and more<br />
<strong>Data</strong> center segmentation by center size in Germany, according to the German Federal Environment Agency<br />
(Umweltbundesamt):<br />
<strong>Data</strong> center type: Server cabinet | Server room | Small center | Medium center | Large center | Total<br />
Total number of servers installed: 160,000 | 340,000 | 260,000 | 220,000 | 300,000 | 1,280,000<br />
Percentage of server clusters: 12.5 % | 26.6 % | 20.3 % | 17.2 % | 23.4 % | 100 %<br />
Number of data centers: 33,000 | 18,000 | 1,750 | 370 | 50 | 53,170<br />
Percentage of all data centers: 62.0 % | 33.9 % | 3.3 % | 0.7 % | 0.1 % | 100 %<br />
<strong>Data</strong> <strong>Center</strong> Inventory in Germany, as of November 2010<br />
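The percentages in the inventory above are simply each category's share of the respective total; a short sketch to cross-check the figures:<br />

```python
# Cross-check of the German data center inventory: each percentage row
# is the category count divided by the column total.
servers = {"server cabinet": 160_000, "server room": 340_000,
           "small center": 260_000, "medium center": 220_000,
           "large center": 300_000}
centers = {"server cabinet": 33_000, "server room": 18_000,
           "small center": 1_750, "medium center": 370,
           "large center": 50}

total_servers = sum(servers.values())   # 1,280,000 servers
total_centers = sum(centers.values())   # 53,170 data centers

for name in servers:
    print(f"{name}: {servers[name] / total_servers:5.1%} of servers, "
          f"{centers[name] / total_centers:5.1%} of data centers")
```

The striking point this makes visible: server cabinets and server rooms account for about 96 percent of all installations but well under half of the installed servers.<br />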
See section 2.1.2. for additional data center types.<br />
1.6. Cabling Systems<br />
The communications cabling system is essential to the availability of IT applications<br />
in data centers. Without a high-performance cabling system, servers, switches,<br />
routers, storage devices and other equipment cannot communicate with each other<br />
and exchange, process and store data.<br />
However, cabling systems have often grown organically over time, and many are not fully capable<br />
of meeting today's requirements.<br />
This is because today's requirements for data centers are high:<br />
• High channel densities<br />
• High transmission speeds<br />
• Interruption-free hardware changes<br />
• Ventilation aspects<br />
• Support<br />
Careful, foresighted planning plus the structuring of communications cabling systems should therefore be high on<br />
the list of responsibilities for data center operators. Furthermore, related regulations such as Basel II and SOX<br />
(U.S.) stipulate consistently stringent transparency.<br />
System Selection<br />
Due to the requirement of maximum availability and ever-increasing data transmission rates, quality demands<br />
on cabling components for data centers are considerably higher than on those for LAN products. Quality<br />
assurance thus starts in the early stages of planning, when selecting systems that meet the performance<br />
requirements listed here.<br />
• Cable design of copper and fiber optic systems<br />
• Bandwidth capacity of copper and fiber optic systems<br />
• Insertion and return loss budget of fiber optic systems<br />
• EMC immunity of copper systems<br />
• Migration capacity to next higher speed classes<br />
• 19" cabinet design and cable management<br />
Both fiber optic and copper offer the option of industrially preassembled cabling components for turnkey systems<br />
in plug-and-play installations. These installations provide the highest possible reproducible quality, very good<br />
transmission characteristics and high operational reliability. Given the high requirements for availability and<br />
operational stability, shielded systems are the best choice for copper systems.<br />
Global standards require a minimum of Class E<sub>A</sub> copper cabling. The choice of supplier is also something that<br />
requires careful consideration. The main requirements of a reliable supplier are – in addition to the quality of<br />
cabling components – professional expertise, experience with data centers and lasting supply performance.<br />
Structure<br />
<strong>Data</strong> centers are subject to constant change as a result of the short life cycles of active components. To avoid<br />
having to make major changes to the cabling system with the introduction of every new device, it is recommended<br />
to set up a well-structured, transparent physical cabling infrastructure that is separate from the active equipment<br />
and connects the rooms housing the devices in a consistent, end-to-end structure.<br />
<strong>Data</strong> center layouts and cabling architecture are discussed in sections 3.2. and 3.4. respectively.<br />
Redundancy and Reliability<br />
The requirement of high availability necessitates the redundancy of connections and components. It must be<br />
possible to replace hardware without interrupting operation, and in case a link fails, an alternative path must take<br />
over and run the application without any glitches. Proper planning of a comprehensive cabling platform is<br />
therefore essential, taking into account factors like bending radii, performance reliability and easy and reliable<br />
assembly during operation.<br />
The availability of applications can be further increased by using preassembled cabling systems, which also<br />
reduces the amount of time installation staff need to spend in the data center's security area for both the initial<br />
installation as well as expansions or changes. This further enhances operational reliability.<br />
It is also important that all products be tested and documented under compliance with a quality management<br />
system.<br />
Like the data center cabling itself, connections used for communication between data centers (e.g. redundant data<br />
centers and back-up data centers) and for outsourcing and the backup storage of data at a different location, need<br />
to be redundant as well. The same applies to the integration of MAN and WAN provider networks.<br />
Installation<br />
In order to be qualified for installation and patching work in data centers, technicians need to be trained in the<br />
specifications of the cabling system.<br />
Suppliers like R&M provide comprehensive quality assurance throughout the entire value chain – from the<br />
manufacturing of components, through professional installation and implementation up to professional maintenance<br />
of the cabling system.<br />
Another advantage of the above-mentioned factory-assembled cabling systems is the time saved during<br />
installation. Moreover, when extending capacity by adding IT equipment, the devices can be cabled together<br />
quickly using preassembled cabling, thus ensuring that the equipment and the associated IT applications are in<br />
operation in the shortest possible time; the same applies to hardware replacements.<br />
Cabinet systems with a minimum width of 800 mm are recommended with<br />
regard to the 19" server cabinet. They allow a consistent cable management<br />
system to be set up in a vertical and horizontal direction. Cabinet<br />
depth is usually determined by the passive and active components to be<br />
installed. A cabinet depth of 1000 mm to 1200 mm is recommended for<br />
the installation of active components.<br />
Ideally, a server cabinet has a modular structure, ensuring reliability at<br />
manageable costs. A modular cabinet can be dismantled or moved to a<br />
different location if needed. Modularity is also important with containment<br />
solutions and climate control concepts.<br />
Equally important is stability. Given the high packing densities of modern<br />
server systems and storage solutions, including the power supply units, load<br />
capacities of over 1000 kg are required for server racks. This means that<br />
cabinet floors and sliding rails must also be designed for high loads. Loads<br />
of 150 kg per floor tile or rail are feasible today.<br />
Cable management is another important aspect. With growing transmission<br />
speeds, it is absolutely essential to run power and data cables<br />
separately in copper cabling systems so as to avoid interference.<br />
Due to increasing server performance and higher packing density in racks, ventilation concepts such as perforated<br />
doors and insulation between hot and cold areas in the rack are becoming more and more important. Further<br />
productivity-improving and energy-optimized solutions can be achieved by means of cold/hot aisle approaches,<br />
which are part of the rack solution.<br />
Documentation and Labeling<br />
A further essential prerequisite for good cabling system administration and sound planning of upgrades and<br />
extensions is meticulous, continuously updated documentation. There are numerous options available, from<br />
individual Excel lists to elaborate software-based documentation tools. It is absolutely essential that the documentation<br />
reflect the current state and the cabling that is actually installed at any given point in time.<br />
Related to documentation is the labeling of the cables; it should be unambiguous, easy to read and readable even<br />
in poor visibility conditions. Here too, numerous options exist, up to barcode-based identification labels. Which<br />
option is best depends on specific data center requirements. Maintaining uniform, company-wide nomenclature is<br />
also important. To ensure unambiguous cable labeling, central data administration is recommended.<br />
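As an illustration of uniform, company-wide nomenclature, a label can encode site, room, rack, panel and port in one fixed format. The scheme below is purely hypothetical, not an R&M or standards-based convention; it only shows the principle of generating labels centrally rather than by hand:<br />

```python
# Hypothetical cable-label scheme: site, room, rack, panel, port.
# The format itself is an illustrative assumption; adapt it to the
# company-wide nomenclature actually in use.
def cable_label(site: str, room: int, rack: int, panel: int, port: int) -> str:
    """Build an unambiguous, fixed-width label for one cable end."""
    return f"{site}-R{room:02d}-C{rack:03d}-P{panel:02d}-{port:02d}"

label = cable_label("ZH1", room=2, rack=17, panel=3, port=24)
print(label)  # ZH1-R02-C017-P03-24
```

Deriving every label from one central function of this kind is one way to guarantee the unambiguous, centrally administered labeling recommended above.<br />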
1.7. Intelligent Cable Management Solutions<br />
<strong>Data</strong> and power supply cables need to be planned in advance and their routing must be documented to allow<br />
immediate response to updates in concepts and requirements. The cable routing on cable racks and runs needs<br />
to be planned meticulously. Raised floors are often used for cable routing and allow maintenance work to be<br />
carried out in the data center without the need for any structural work. Further information appears in sections<br />
3.5.3. and 3.5.4.<br />
Cable penetration seals, however, often prove to be a weak point. Like walls, ceilings and doors, penetration seals<br />
need to fulfill all safety requirements with respect to fire, gas and water protection. To allow for swift and efficient<br />
changes and retrofittings in the cabling, they also need to be flexible.<br />
1.8. Network Hardware and Virtualization<br />
A comprehensive examination of a data center must include server security needs and the network. Many<br />
companies have already converted their phone systems to Voice over IP (VoIP), and the virtualization of servers<br />
and storage systems is spreading rapidly. Virtualized clients will be the next step.<br />
This development means that business-critical services will be run over data links, which also carry the power to<br />
the terminal devices via Power over Ethernet (PoE). Along with the growing importance of network technology to<br />
ensure uninterrupted business operations, security requirements are growing in this area too.<br />
The rack is the basis for dimensioning servers as well as components in network technology. Since active<br />
components are standardized to 19", network cabinets are usually based on the same platform. Requirements<br />
with regard to stability, fire protection and access control are comparable as well.<br />
Network infrastructures in buildings are generally designed to last for 10 years; foresighted planning in<br />
the provisioning of network cabinets is therefore recommended. Accessories should be flexible to make sure that future<br />
developments can be easily integrated. In terms of interior construction, racks differ considerably.<br />
Due to the frequent switching at connection points, i.e. ports, of network components, cables in network cabinets<br />
need to be replaced more frequently than those in server cabinets. These processes, also called MACs (Moves,<br />
Adds, Changes) and the increasing density of ports make good cable management even more important. Top and<br />
bottom panels in the rack are subject to these demands as well. Easy cable entry in these areas makes extensions<br />
easier and ensure short cable runs. Cable ducts and management panels ensure clean, fine distribution in the<br />
rack. In cable management, particular attention has to be paid to the stability of components. Minimum bending<br />
radii of cables must also be considered.<br />
Climate control is another aspect which is rapidly gaining in importance with network cabinets. Switches and<br />
routers are increasingly powerful, thereby producing more heat. It is important to allow for expansion when<br />
selecting climate control devices. Options range from passive cooling through the top panels and venting devices<br />
to double-walled housings, fans, top-mounted cooling units and cooling units placed between the racks.<br />
Find further information on network hardware and virtualization in sections 3.6 and 3.7. respectively.<br />
1.9. <strong>Data</strong> <strong>Center</strong> Energy Consumption<br />
Greenpeace claims that at current growth rates, data centers and telecommunication<br />
networks will consume about 1,963 billion kilowatt hours of electricity in 2020, which<br />
would translate into a threefold increase within 10 years. According to Greenpeace,<br />
that’s more than the current electricity consumption of France, Germany, Canada and<br />
Brazil combined.<br />
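A threefold increase within 10 years, as cited above, implies a compound annual growth rate of roughly 11.6 percent and a 2010 baseline of about 654 billion kWh; both figures are derived here from the Greenpeace numbers:<br />

```python
# Derive the implied 2010 baseline and annual growth rate from the
# Greenpeace projection: 1,963 billion kWh in 2020, threefold over 10 years.
consumption_2020 = 1963e9  # kWh
factor, years = 3.0, 10

baseline_2010 = consumption_2020 / factor   # implied 2010 consumption
cagr = factor ** (1 / years) - 1            # compound annual growth rate

print(f"2010 baseline: {baseline_2010 / 1e9:.0f} billion kWh")  # 654
print(f"Implied growth: {cagr:.1%} per year")                   # 11.6%
```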
The powerful emergence of Cloud Computing and the use of the Internet as an operating<br />
system and not just a way to access stored data are reported to generate an enormous<br />
increase in the energy needs of the IT industry.<br />
<strong>Data</strong> centers are bursting at their seams. In their studies, Gartner analysts predict that data centers will reach their<br />
limits in the next three years, with respect to energy consumption and space requirements. In their opinion, this is<br />
mostly due to technical developments such as new processors in multi-core design, blade servers and<br />
virtualization, which push density higher and higher. According to the analysts, density will increase at least<br />
tenfold in the next ten years, driving energy consumption for both operation and cooling to considerably higher<br />
levels. This is particularly true for data centers that have been in operation for a longer period of time and use<br />
outdated technologies. In some of these, as much as 70% of the energy consumption is required just for cooling.<br />
In 2003, the average energy consumption per server cabinet amounted to 1.7 kW; in 2006 it was already up to<br />
6.0 kW, and in 2008 it was 8.0 kW. Today, energy consumption is 15 kW for a cabinet containing the maximum<br />
number of servers and 20 kW with the maximum number of blade servers. (The overall distribution of energy<br />
consumption in a data center is shown as a pie chart in the original layout.)<br />
The energy consumption of processors today amounts to approx. 140 watts, that of servers approx. 500 watts<br />
and that of blade servers approx. 6,300 watts. Each watt of computing performance requires 1 watt of cooling.<br />
The following table summarizes the actual energy consumption (source: Emerson):<br />
Energy consumption at server level: 1 watt<br />
DC/DC conversion: + 0.18 watt<br />
AC/DC conversion: + 0.31 watt<br />
Power distribution: + 0.04 watt<br />
Uninterruptible power supply, UPS: + 0.14 watt<br />
Cooling: + 1.07 watt<br />
Building switchgear/transformer: + 0.10 watt<br />
Total energy consumption: 2.84 watts<br />
Conclusion:<br />
1 watt IT savings = 3 watts of total savings!<br />
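The Emerson cascade above can be verified by summing the incremental loads: each watt consumed at server level draws 2.84 watts at the building input, which is where the "1 watt IT savings = 3 watts of total savings" rule of thumb comes from:<br />

```python
# Emerson cascade: facility-level load caused by 1 watt consumed at server level.
cascade = {
    "energy consumption at server level": 1.00,
    "DC/DC conversion":                   0.18,
    "AC/DC conversion":                   0.31,
    "power distribution":                 0.04,
    "uninterruptible power supply (UPS)": 0.14,
    "cooling":                            1.07,
    "building switchgear/transformer":    0.10,
}
total = sum(cascade.values())
print(f"Total facility load per server watt: {total:.2f} W")  # 2.84 W
```

Saving 1 W of IT load therefore removes about 2.84 W, roughly 3 W, from the building input.<br />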
1.10. Energy Efficiency – Initiatives and Organizations<br />
There are numerous measures for increasing energy efficiency in data centers. The chain of cause and effect<br />
starts with applications, continues with IT hardware and ends with the power supply and cooling systems.<br />
A very important point is that measures taken at the very beginning of this chain, i.e. at the causes, are the most<br />
effective ones. When an application is no longer needed and the server in question is turned off, less power is<br />
consumed, losses in the uninterruptible power supply decrease and so does the cooling load.<br />
Virtualization – a way out of the trap<br />
Virtualization is one of the most effective tools for a cost-efficient green computing solution. By partitioning<br />
physical servers into several virtual machines to process applications, companies can increase their server<br />
productivity and downsize their extensive server farms.<br />
This approach is so effective and energy-efficient that the Californian utility corporation PG&E offers incentive<br />
rebates of 300 to 600 U.S. dollars for each server eliminated thanks to Sun or VMware virtualization products.<br />
These rebate programs compare the energy consumption of existing systems with that of systems in operation<br />
after virtualization. The refunds are paid once the qualified server consolidation project is implemented. They are<br />
calculated on the basis of the resulting net reduction in kilowatt hours (at a rate of 8 cents per kilowatt hour). The<br />
maximum rebate is 4 million U.S. dollars or 50 percent of the project costs.<br />
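The rebate mechanics described above can be sketched as follows; the 8-cent rate and both caps come from the text, while the per-server kWh figures in the example are illustrative assumptions only:<br />

```python
# Sketch of the PG&E virtualization rebate described in the text.
# Rate (USD 0.08/kWh) and caps (USD 4M, 50% of project cost) are from the
# text; the per-server consumption figures below are assumptions.
def rebate_usd(kwh_before: float, kwh_after: float, project_cost: float,
               rate: float = 0.08) -> float:
    """Rebate from the net annual kWh reduction, subject to program caps."""
    net_reduction = max(kwh_before - kwh_after, 0.0)
    rebate = net_reduction * rate
    return min(rebate, 4_000_000, 0.5 * project_cost)

# Example: consolidating 100 servers (assumed 4,400 kWh/yr each) onto 10 hosts.
print(rebate_usd(kwh_before=100 * 4_400, kwh_after=10 * 4_400,
                 project_cost=60_000))  # 30000.0 (capped at 50% of cost)
```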
By implementing a virtual abstraction level to run different operating systems and applications, an individual server<br />
can be cloned and thus used more productively. On the strength of virtualization, energy savings can be increased<br />
in practice by a factor of three to five – and even more in combination with a consolidation to high-performance<br />
multi-processor systems.<br />
Cooling – great potential for savings<br />
A rethinking process is also taking place in the area of cooling. The cooling system of a data<br />
center is turning into a major design criterion. The continuous increase in processor performance<br />
is leading to a growing demand for energy, which in turn leads to considerably higher<br />
cooling loads. This means that it makes sense to cool partitioned sections in the data center<br />
individually, in accordance with the specific way heat is generated in the area.<br />
The challenge is to break the vicious circle of an increased need of energy leading to more heat, which in turn has<br />
to be cooled, consuming a lot of energy. Only an integrated, overall design for a data center and its cooling<br />
system allows performance requirements for productivity, availability and operational stability to be synchronized<br />
with an energy-efficient use of hardware.<br />
In some data centers, construction design focuses more on aesthetics than on efficiency, an example being the<br />
hot aisle/cold aisle constructions. Water or liquid cooling can have an enormous impact on energy efficiency, and<br />
is 3000 times more efficient than air cooling. Cooling is further discussed in section 3.5.2.<br />
Key Efficiency Benchmarks<br />
Different approaches are available for evaluating the efficient use of energy in a data center.<br />
The approach chosen by the Green Grid organization works with two key benchmarks: Power<br />
Usage Effectiveness (PUE) and <strong>Data</strong> <strong>Center</strong> Infrastructure Efficiency (DCiE). PUE is the<br />
quotient of total facility power and IT equipment power, i.e. how much power the facility draws<br />
in total for each watt delivered to the IT equipment. DCiE is the reciprocal value, the quotient of<br />
IT equipment power and total facility power. The DCiE thus equals 1/PUE and is expressed<br />
as a percentage.<br />
A DCiE of 30 percent means that only 30 percent of the energy is<br />
used to power the IT equipment. This would result in a PUE value of<br />
3.3. The closer this ratio gets to the value of 1, the more efficiently the<br />
data center uses its energy. Google can claim a PUE value of 1.21 for<br />
6 of their largest facilities.<br />
Total facility power includes the energy used by the power distribution<br />
switchboard, the uninterruptible power supply (UPS), the cooling<br />
system, climate control and all IT equipment, i.e. computers, servers<br />
and associated communication devices and peripherals.<br />
PUE | DCiE | Level of efficiency<br />
3.0 | 33% | very inefficient<br />
2.5 | 40% | inefficient<br />
2.0 | 50% | average<br />
1.5 | 67% | efficient<br />
1.2 | 83% | very efficient<br />
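The reciprocal relationship between the two benchmarks can be written out directly; the values reproduce the efficiency table above:<br />

```python
# PUE = total facility power / IT equipment power;
# DCiE = IT power / total facility power = 1 / PUE, as a percentage.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

def dcie_percent(pue_value: float) -> float:
    return 100.0 / pue_value

for p in (3.0, 2.5, 2.0, 1.5, 1.2):
    print(f"PUE {p:.1f} -> DCiE {dcie_percent(p):.0f}%")

# Example: a facility drawing 600 kW in total for 300 kW of IT load.
print(pue(600, 300))  # 2.0 (DCiE 50%, "average" in the table above)
```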
In addition to the existing key benchmarks, the Green Grid is introducing two new metrics, whose goal is to<br />
support IT departments in optimizing and expanding their data centers. These new metrics are called Carbon<br />
Usage Effectiveness (CUE) and Water Usage Effectiveness (WUE). CUE will help managers determine the<br />
amount of greenhouse gas emissions generated from the IT gear in a data center facility. Similarly, WUE will help<br />
managers determine the amount of water used to operate IT systems.<br />
Initiatives and Organizations<br />
The Green Grid is an American industry association of suppliers of supercomputers and<br />
chips, dedicated to the concept of environment-friendly IT, also called green IT. The<br />
consortium was founded by the companies AMD, APC, Cisco, Dell, Hewlett Packard, IBM,<br />
Microsoft, Rackable Systems, Sun Microsystems and VMware, and its members are IT<br />
companies and professionals seeking to improve energy efficiency in data centers. The<br />
Green Grid develops manufacturer-independent standards (e.g. the IEEE P802.3az Energy<br />
Efficient Ethernet Task Force), and measuring systems and processes to lower energy<br />
consumption in data centers.<br />
The European Code of Conduct on <strong>Data</strong> Centres Energy Efficiency was introduced by the European Commission<br />
in November 2008 to curb excessive energy consumption in data center environments. The code comprises a<br />
series of voluntary guidelines, recommendations and examples of best practices to improve energy efficiency.<br />
<strong>Data</strong> centers implementing an improvement program recognized by the EU commission may use the code's logo.<br />
Every year, data centers that are especially successful receive an award. In the medium-term, quantitative<br />
minimum requirements will also be defined.<br />
The SPEC Server Benchmark Test of the U.S. Standard Performance Evaluation Corporation (SPEC)<br />
has kicked off the efficiency competition in the IT hardware sector. Manufacturers are focusing on producing and<br />
testing configurations that are as efficient as possible.<br />
1.11. Green IT<br />
Green IT is about reducing operational costs while at the same time enhancing a data<br />
center's productivity. The targets are measurable, short-term results. A top priority for IT<br />
managers in organizations is the efficiency of data centers.<br />
Financial service providers, with their enormous need for computing power, are not the<br />
only ones who can profit from green IT. Today, companies in all industrial sectors are<br />
paying much more attention to their electricity bills. According to a study by Jonathan<br />
Koomey, a researcher at Lawrence Berkeley National Laboratory and Stanford<br />
University, energy costs for servers and data centers have doubled in the last five years.<br />
In 2010, the U.S. Environmental Protection Agency (EPA) projected that the electricity consumption of data<br />
centers would double again in the coming five years, generating costs of another 7.4 billion dollars per year.<br />
These bills are processed in the finance departments of the companies. The pressure is now on IT managers to<br />
reduce energy costs, which in turn results in environmental benefits.<br />
Along with the automotive and real estate industries, ICT is one of the key starting points for the improvement of<br />
climate protection. According to a study¹ by the Gartner group, ICT accounts for 2 to 2.5 percent of the global CO2<br />
emissions, about the same as the aviation industry. The study states that a quarter of these emissions are caused<br />
by large data centers and their constant need for cooling. In office buildings, IT equipment power normally<br />
accounts for over 20 percent, in some offices for over 70 percent, of the energy used.<br />
In addition to the energy demand for production and operation of hardware, i.e. computers, monitors, printers and<br />
phones, the materials used and production methods also need to be taken into account. This latter area includes<br />
the issue of hazardous substances, whether such substances are involved in the production or if any poisonous<br />
substances such as lead or bromine are contained in the end product and released during use or disposal. More<br />
detailed specifications are provided by the EU's RoHS Directive (Restriction of Hazardous Substances).<br />
¹ Gartner (2007) “Gartner Estimates ICT Industry Accounts for 2 Percent of Global CO2 Emissions”<br />
1.12. Security Aspects<br />
In many countries, legislation stipulates that IT systems are an integral part of corporate<br />
processes and thus no longer only a tool to advance a company's success, but an element<br />
that is bound by law and in itself part of the corporate purpose. Laws and regulations such as<br />
the German Supervision and Transparency in the Area of Enterprise Act (KonTraG), the Basel<br />
II Accord or the Sarbanes-Oxley-Act (SOX) almost entirely integrate the company's own IT into<br />
the main corporate processes. As a consequence, IT managers have to deal with significant<br />
liability risks. Companies and organizations find themselves facing IT-related complications<br />
such as customer claims for compensation, productivity losses and negative effects on the<br />
corporate image, to name but a few.<br />
The need to act and set up IT systems that are more secure and available is therefore a question of existential<br />
importance for companies. The first step is a comprehensive analysis phase to specify weaknesses in IT<br />
structures, in order to determine the exact requirements for IT security. The next step is the planning process,<br />
taking all potential hazards in the environment of the data center into consideration and providing additional<br />
safeguards if necessary. A detailed plan must be worked out in advance to avoid a rude awakening, defining room<br />
allocations, transport paths, room heights, cable laying routes, raised floor heights and telecommunications<br />
systems.<br />
Taking a broader view of IT security reveals that it goes beyond purely logical and technical security. In addition to<br />
firewalls, virus protection and storage concepts, it is essential to protect IT structures from physical damage.<br />
Regardless of the required protection class, ranging from basic protection to high availability with minimum<br />
downtime, it is essential to work out a requirement-specific IT security concept.<br />
Cost-effective IT security solutions are modular so that they can meet specific requirements in a flexible manner.<br />
They are scalable so that they can grow in line with the company and, most of all, they are comprehensive to<br />
ensure that in the event of any hazard, the protection is actually in place. It is therefore paramount to know the<br />
potential hazards beforehand, as a basis for implementing specific security solutions.<br />
Fire Risk<br />
Only 20 percent or so of all fires start in the server room, or its environment. Nearly 80<br />
percent of fires start outside IT structures. This risk therefore needs to be examined on<br />
two levels. Protection against fire originating in the server room can be covered by early<br />
fire detection systems (EFD) combined with fire-alarm and extinguishing systems. These systems can<br />
also be designed redundantly, so that false alarms are avoided. EFD systems suck<br />
ambient air from racks by means of active smoke extraction and detect even<br />
the smallest, invisible smoke particles. Digital particle counters, as used in laser<br />
technology, can also be applied here.<br />
Due to high air speeds in climate-controlled rooms, the smoke is rapidly diluted, meaning that the EFD systems<br />
must have a sufficiently high detection sensitivity. Interference can be suppressed with filters and<br />
intelligent signal-processing algorithms. Professional suppliers also offer these systems in a version that combines<br />
fire-alarm and extinguishing systems, which can be easily integrated in 19" server racks for space efficiency.<br />
Using non-poisonous extinguishing gases, fires can be put out in<br />
their pyrolysis phase (fire ignition phase), effectively minimizing<br />
potential damage and preventing the fire from spreading further.<br />
Extinguishing gas works much faster than foam, powder or<br />
water, causes no damage and leaves no residue. In modern<br />
systems, gas cartridges can even be replaced and activated<br />
without the help of technicians. In addition to FM-200 and noble<br />
gases (e.g. argon), nitrogen, Inergen or carbon dioxide are also<br />
used to smother fires through oxygen removal. There are also<br />
extinguishing gases which put out fires by absorbing heat, e.g.<br />
the new Novec™ 1230. Their advantage is that only small<br />
quantities are required.<br />
Fire in the data center<br />
On the evening of March 1, 2011, a fire broke out in the high-performance data center in Berlin. For security reasons, all servers were shut down and the entire space was flooded with CO2 to smother the fire!<br />
German Red Cross<br />
In addition to the use of extinguishing gases, the oxygen level can be reduced (called inertization) as a parallel<br />
process in fire-hazard rooms such as data centers. The ambient air is split into its individual components via an air<br />
decomposition system, reducing the oxygen level to approx. 15 percent – providing a means of early fire<br />
prevention. This oxygen reduction does not mean that people cannot enter the data center, as it is principally<br />
non-hazardous to humans. Both EFD and fire-alarm and extinguishing systems are now available from leading<br />
manufacturers in space-saving, installation-friendly 1U versions, showing that effective protection is not a question<br />
of space.<br />
Page 16 of 156 © 08/2011 Reichle & De-Massari AG R&M Data Center Handbook V2.0
www.datacenter.rdm.com<br />
Water Risk<br />
An often neglected danger for IT systems is water. The threat usually comes not from<br />
pipe leaks or floods, but from extinguishing water used to fight the fire hazards<br />
discussed above. The damage caused by a fire is often<br />
less severe than the damage caused by the water used to extinguish it.<br />
This means that IT rooms need to stay watertight during the fire-fighting process, and they must be resistant to<br />
long-standing stagnant water that, for example, occurs in a flood. Watertightness should be proven and<br />
independently certified to be EN 60529-compliant. Protection from stagnant water over a period of 72 hours is the<br />
current level of technology with high-availability systems. The latest developments allow data centers to be<br />
equipped with wireless sensors that are able to detect leaks early, send warning signals and automatically close<br />
doors if required. This is particularly important when highly efficient liquid-cooling systems are used for racks.<br />
Smoke Risk<br />
Even if a fire is not raging in the immediate vicinity of a data center, smoke still presents a risk of severe damage<br />
to IT equipment. In particular, burning plastics such as PVC create poisonous and corrosive smoke gases.<br />
Burning one kilogram of PVC releases approx. 360 liters of hydrochloric acid gas, producing up to 4500 cubic<br />
meters of smoke gas. This can destroy an IT facility in a short time, substantially reducing the "mean time<br />
between failures" (MTBF), i.e. the predicted elapsed time between inherent failures of a system during<br />
operation.<br />
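The relationship between MTBF, the mean time to repair (MTTR) and availability can be sketched as follows; the figures are illustrative, not taken from the handbook:<br />

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to repair (MTTR): A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative: a system failing on average every 50,000 h and repaired
# within 4 h reaches roughly "four nines" availability.
print(f"{availability(50_000, 4):.5%}")  # 99.99200%
```

Shortening the MTTR (e.g. through concurrently maintainable infrastructure, see the tier classes in section 2.2) raises availability just as effectively as lengthening the MTBF.<br />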
Reliable protection can only be provided by hermetically-sealed server rooms which are able to resist these<br />
hazardous gases and protect their valuable components against the threat. Smoke gas resistance tested in<br />
accordance with DIN 18095 is essential. In Germany, the level of water and gas resistance is defined based on the<br />
IP categorization. A data center should have a protection class of IP56.<br />
Power Supply Risk<br />
Uninterruptible power supply systems, called UPS systems, take over when the power network<br />
fails. Modern UPS systems (online systems) operate continually, supplying consumers via their<br />
power circuits. This means that the brief yet dangerous change-over time can be eliminated.<br />
The UPS system simply and reliably bridges the time until the power network is up and running<br />
again. Thanks to integrated batteries, UPS systems also operate continuously if power is off for<br />
a longer period of time. UPS systems are classified in accordance with EN 50091-3 and EN<br />
62040-3 (VFI). For reliable breakdown protection, equipment used in data centers should fulfill<br />
the highest quality class 1, VFI-SS-111.<br />
UPS systems are divided into single-phase and multi-phase groups of 19" inserts and stand-alone<br />
units of different performance classes. The units provide a clean sinusoidal voltage and effectively balance out voltage<br />
peaks and "noise". Particularly user-friendly systems can be extended as required and retrofitted during operation.<br />
If, however, the power supply network stays offline for several hours, even the best batteries run out of power.<br />
This is where emergency power systems come into play. These are fully self-sufficient systems that independently<br />
generate the power needed to keep the data center running and recharge the batteries in the UPS system. These<br />
emergency systems are usually diesel engines that start up in the event of a power failure once the power supply<br />
has been taken over by the UPS systems.<br />
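The bridging time a UPS can provide until the emergency power system takes over can be estimated roughly; the linear model and all figures below are illustrative assumptions:<br />

```python
def ups_bridge_minutes(battery_wh: float, load_w: float,
                       inverter_efficiency: float = 0.95) -> float:
    """Rough UPS runtime: usable battery energy divided by the load.
    Real runtimes are shorter at high discharge currents, so treat
    this linear model as an optimistic upper bound."""
    return battery_wh * inverter_efficiency / load_w * 60

# Illustrative: 20 kWh of batteries feeding a 40 kW load bridge under
# half an hour, which the generator start-up time must stay within.
print(round(ups_bridge_minutes(20_000, 40_000), 1))  # 28.5
```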
New research indicates that in the future these diesel engines could be run on resource-conserving fuels such<br />
as vegetable oil, similar to a cogeneration unit (combined heat and power plant). This would mean that the units can<br />
continuously generate power in an environmentally-friendly manner without additional CO2 emissions, and this<br />
power could even be sold profitably when it is not required for the data center operation. Fuel cells will also<br />
become increasingly important as power sources for emergency power systems. Fuel cells reduce total cost of<br />
ownership (TCO) and offer distinct advantages over battery-buffered back-up systems, with respect to service life,<br />
temperature fluctuations and back-up times. In addition, they are very environmentally friendly because they generate<br />
pure water as their sole reaction product.<br />
Air-Conditioning Risk<br />
Modern blade server technologies boost productivity in data centers. Climate-control<br />
solutions are primarily concerned with transporting the heat emitted in the process. <strong>Data</strong><br />
center planners must remember that every productivity boost also increases the demand for<br />
cooling capacity of the air-conditioning units in a data center. With maximum thermal loads of<br />
800 W/m² in a data center, cooling units that are suspended from the ceiling or mounted to<br />
walls can be used. With thermal loads above 800 W/m², however, the data center requires<br />
floor-mounted air-conditioning units which blow the air downwards into the raised floor.<br />
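The 800 W/m² threshold can be expressed as a simple sizing check; the function name and example figures here are invented for illustration:<br />

```python
def cooling_approach(total_it_load_w: float, floor_area_m2: float) -> str:
    """Choose a cooling concept from the areal thermal load, using the
    800 W/m2 threshold cited in the text."""
    if total_it_load_w / floor_area_m2 <= 800:
        return "ceiling- or wall-mounted units"
    return "floor-mounted units blowing into the raised floor"

# 105 kW on 150 m2 gives 700 W/m2, still within reach of ceiling/wall units.
print(cooling_approach(105_000, 150))  # ceiling- or wall-mounted units
```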
In general, air-conditioning units can be placed inside or outside a data center. Inside use is better for rack-based<br />
cooling or specific cooling of hot spots in a server room. Operational costs are lowered and the associated noise<br />
stays inside the room. Units are also protected from unauthorized access and the protective wall is not weakened<br />
by additional openings for ventilation slides. The advantages of outside use are that service technicians do not<br />
need to enter the room for maintenance work, the air-conditioning units require no extra space in the data center<br />
and the fire load is not increased by the air-conditioning system. Fresh air supply can generally be achieved<br />
without additional costs.<br />
In most cases, redundancy in accordance with Tier III and IV (see section 2.2.1.) is only required for air-conditioning<br />
units on walls and ceilings that have a rather low cooling capacity. When higher capacities and thus<br />
stand-alone units are required, an "N+1 redundancy" in accordance with Tier II should be established, which keeps<br />
a number of units in continuous operation while an additional unit acts as a reserve (redundancy).<br />
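The "N+1 redundancy" described above translates into a simple unit count; the figures below are illustrative:<br />

```python
import math

def units_for_n_plus_1(required_kw: float, unit_kw: float) -> int:
    """Unit count for N+1 redundancy: N units suffice to carry the full
    load, and one additional unit stands by as a reserve."""
    n = math.ceil(required_kw / unit_kw)
    return n + 1

# Illustrative: 550 kW of heat load and 100 kW units -> 6 running + 1 reserve.
print(units_for_n_plus_1(550, 100))  # 7
```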
In order to establish and maintain the recommended relative humidity range of 40% to 60%, the air-conditioning<br />
units should feature both air humidifiers and dehumidifiers. A safe bet is to opt for a system certified by Eurovent<br />
(the interest group of European manufacturers of ventilation and air-conditioning systems). For cooling hot spots<br />
in data centers, the use of liquid-cooling packages is also an option. These packages extract the emitted heat/hot<br />
air along the entire length of the cabinet by means of redundant, high-performance fans, discharging it via an air<br />
and water heat exchanger into a cold water network or a cooler.<br />
Dust Risk<br />
Dust is the natural enemy of sensitive IT systems and has no place in secure data centers. Fine dust particles<br />
can drastically reduce the life cycle of ventilators and other electronic units. A main source of dust is<br />
maintenance work and staff, so any unnecessary intrusion into secured data centers must be avoided. An intelligent IT room<br />
security concept is absolutely dust-free, and the dust-free policy also applies to extension or upgrade work. The dust-tightness<br />
in place should comply with the specifications in EN 60529 and fulfill IP56 with characteristic 1 (see<br />
water risk) in order to avoid unpleasant surprises at a later stage.<br />
Unauthorized Access Risk<br />
The data center is one of the most sensitive areas in a company. It is critical that only<br />
authorized persons have access, and that every data center access is documented. A<br />
study of the International Computer Security Association (ICSA) showed that internal<br />
attacks on IT systems are more frequent than external ones. Protection of the data center<br />
must therefore first meet all requirements in terms of protection against unauthorized<br />
access, sabotage and espionage, and also ensure that specifically authorized persons<br />
can only enter the rooms they need to perform their defined tasks. Burglary protection in<br />
accordance with EN 1627 with resistance class III (RC III) is easily implemented. All<br />
processes are to be monitored and recorded in accordance with the relevant documentation<br />
and logging regulations.<br />
If possible, the air conditioning system and electrical equipment should be physically separated from the servers<br />
so that they can be serviced from the outside. For access control purposes, biometric solutions, standard access<br />
control solutions, or a combination of the two can be installed. Biometric systems in combination with magnetic card<br />
scanners enhance the security level considerably. Most of all, the access control solution installed should fulfill the<br />
specific requirements of the operator. The highest level of security is provided by the new vein recognition<br />
technology. Its decisive advantage is its high precision, with a false acceptance rate of less than<br />
0.00008 percent and a false rejection rate of only 0.01 percent. Moreover, it ensures highly hygienic handling,<br />
since operating the device does not require direct contact.<br />
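The quoted error rates can be put into perspective with a little arithmetic; the attempt count is an illustrative assumption:<br />

```python
def expected_events(rate_percent: float, attempts: int) -> float:
    """Expected number of erroneous decisions for a per-attempt error
    rate given in percent."""
    return rate_percent / 100 * attempts

# With the quoted rates, 100,000 access attempts would on average produce
# fewer than 0.1 false acceptances but about 10 false rejections.
print(round(expected_events(0.00008, 100_000), 4))  # 0.08
print(round(expected_events(0.01, 100_000), 4))     # 10.0
```

In other words, at these rates the practical tuning concern is user convenience (false rejections), not intruders slipping through.<br />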
By using video surveillance systems with image sensors in CCD and CMOS technology, up to 1000 cameras can<br />
be managed with the matching software, regardless of the manufacturer. Video surveillance systems provide<br />
transparency, monitoring and reliability in data centers. Advanced video management technology enables modern<br />
surveillance systems to manage and record alarm situations. For images to be analyzed and used as evidence,<br />
an intelligent system must provide the proper interfaces and processing possibilities.<br />
Explosion Risk<br />
The risk of terror attacks or other catastrophes which could trigger an explosion must be factored in right from the<br />
start when planning a highly available security room concept. Modern, certified server rooms need to undergo an<br />
explosion test in accordance with the SEAP standard. High-security modular server rooms are built with pressure-resilient<br />
wall panels to resist heavy explosions, protecting valuable IT systems from irreparable damage. IT<br />
systems must also be protected against debris and vandalism to ensure real all-round protection.<br />
Future Viability<br />
The long-term planning of communications structures in these times of short product life cycles<br />
and continually growing demands on IT systems is becoming increasingly complex for many<br />
companies. Future developments need to be taken into account when planning a data center.<br />
Will the data center become larger or smaller in the future? Will it require a new site, and how<br />
can an existing data center be made secure during operation?<br />
Are there any options for setting up a data center outside company grounds, or for carrying out a complete<br />
relocation without an expensive dismantling of the components? A partner with professional<br />
experience in building a data center is at the customer's side right from the beginning of the project and does not<br />
abandon them once the center is completed, but provides long-term support to the company. Networks in data<br />
centers will undergo significant changes in the coming years, since corporate IT itself is being restructured. Trends<br />
such as virtualization and the resulting standardization of infrastructures will present a challenge to the network<br />
architectures in use today.<br />
Flexibility and Scalability<br />
In order for data center operators to benefit from both high flexibility and investment security,<br />
they need to be able to choose the suppliers of secure data centers on the basis of valid<br />
criteria. Certificates from independent test organizations make an important contribution to that end.<br />
An external inspection of the construction work during and after construction is also important.<br />
Scalable solutions are indispensable for an efficient use of data center infrastructures. Only<br />
suppliers who fulfill this requirement should be included in the planning process.<br />
A well-planned use of building and office space can minimize the risk of a total system failure by means of a<br />
decentralized arrangement of data center units. Using existing building and office space intelligently does not<br />
necessarily involve new constructions. Modular security room technologies also allow for decentralized installation<br />
of secure server rooms. Thanks to this modularity, they can be integrated into existing structures cost-effectively,<br />
and they can be easily extended, changed or even relocated if necessary. This room structuring, designed to meet<br />
specific needs and requirements, translates into considerable cost savings for the operator. Security rooms can<br />
often be rented or leased, making it easy to carry out short-term extensions as well.<br />
Maintenance<br />
Regardless of the size of an enterprise, the importance of the availability of IT systems is<br />
constantly growing. This is why maintenance work is also necessary once a secure data<br />
center is completed. Continuous, documented maintenance and monitoring of IT structures at<br />
defined intervals is indispensable in this day and age, yet it is often neglected. A long-term<br />
service concept must be established, one that is effective even when a failure does not occur.<br />
Only a robust concept of required services that is adapted to specific data center<br />
requirements provides all-around security for data centers.<br />
Maintenance, service and warranties for each aspect, i.e. climate control, fire-alarm and early fire detection<br />
systems, UPS systems, emergency power systems, video surveillance and access control, need to be covered in an<br />
"all-round worry-free package" that also incorporates the physical and power environment of IT structures. A wide<br />
range of services is available, covering specific requirements. Case-by-case evaluation is required to determine<br />
whether a company needs full technical customer service, including monitoring and 24/7 availability, or a different<br />
solution.<br />
Remote Surveillance<br />
Special remote surveillance tools allow for external monitoring and control of all data center functions. In the event<br />
of an alarm, a pre-determined automatic alarm routine is activated without delay. This routine might be an optical<br />
or acoustic signal sent as a message through an interface to the administrator or a defined call center. In addition,<br />
these tools control the fire extinguishing systems and can be programmed to activate other measures in the alarm<br />
sequence plan. Color displays at the racks indicating the status of a system allow for a visual inspection of system<br />
states, for example via a webcam.<br />
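Such an alarm sequence plan can be sketched as a mapping from event types to ordered actions; all event and action names here are invented for illustration:<br />

```python
# Hypothetical alarm sequence plan: each event type maps to an ordered
# list of actions that the surveillance tool triggers without delay.
ALARM_PLAN = {
    "smoke": ["trigger_acoustic_signal", "notify_admin", "arm_extinguishing"],
    "leak":  ["notify_admin", "close_doors"],
    "door":  ["notify_call_center"],
}

def handle_alarm(event: str) -> list[str]:
    """Return the ordered actions for an event; unknown events escalate
    to the call center by default."""
    return ALARM_PLAN.get(event, ["notify_call_center"])

print(handle_alarm("leak"))  # ['notify_admin', 'close_doors']
```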
Integrated Approach<br />
A data center consultation process that takes into account the entire corporate structure follows an integrated<br />
approach. Obviously, a comprehensive risk analysis combined with the analysis and objective assessment<br />
of existing building structures at the locations in question is a basic requirement. The entire planning process,<br />
including a customer requirement specification, is an integral element of an offer. This makes it possible to realize,<br />
for each business, the optimal solution: a secure data center that meets all requirements.<br />
2. Planning and Designing Data Centers<br />
The volumes of data that must be processed are constantly increasing, demanding higher productivity in data<br />
centers. At the same time, data center operators need to keep their operational costs in check, improve profitability<br />
and eliminate sources of error that can potentially cause failures. Statistics show that many failures occurring<br />
in data centers are caused by problems in the passive infrastructure or human error. Increasing availability starts<br />
with quality planning of the infrastructure and the cabling system.<br />
2.1. Data Center Types<br />
The data center sector is teeming with technical terms describing the various business models and services<br />
offered by data center operators. This section lists definitions of the key terms.<br />
2.1.1. Business Models and Services<br />
The term data center does not reveal what type of business model an operator runs and what type of data center<br />
facility it is.<br />
The terms<br />
Housing – Hosting – Managed Services – Managed Hosting – Cloud Services – Outsourcing – and<br />
others<br />
describe the services offered. What they have in common is that the operator provides and manages the data<br />
center's infrastructure.<br />
However, not every supplier of data center services actually runs his/her own data center. There are different<br />
business models in this area as well:<br />
• The supplier rents space from a data center operator;<br />
• The supplier acts as a reseller for this data center operator;<br />
• The supplier rents the space according to need.<br />
In the case of housing services, the operator provides the rented data center space plus cabling. The customer<br />
usually supplies his own servers, switches, firewalls, etc. The equipment is managed by the customer or a hired<br />
third party.<br />
In the case of hosting services, the web host provides space on a server or storage space for use by the<br />
customer, and also provides Internet connectivity. The services include web hosting, shared hosting, file<br />
hosting, free hosting, application service provider (ASP), Internet service provider (ISP), full service<br />
provider (FSP), cloud computing provider, and more.<br />
In both managed services and managed hosting, the contracting parties agree on specific services on a case-by-case<br />
basis. All conceivable combinations are possible: data center operators running the clients' servers,<br />
including the operating system level, or data center operators providing the servers for their clients, etc.<br />
Even less clearly defined are the cloud services. Since they are a trend topic, nearly every company is offering<br />
cloud services, for example SaaS (Software as a Service), IaaS (Infrastructure as a Service), PaaS (Platform as a<br />
Service) or S+S (Software plus Service). It is important to first establish the specific components of a product<br />
portfolio.<br />
Pure housing services qualify as outsourcing, as does relocating the entire IT operation to be run by a<br />
third-party company.<br />
Areas<br />
The terms<br />
collocation – open collocation – cage – suite – room<br />
– rack – cabinet – height unit – and others<br />
refer to the space rented in a data center.<br />
• Collocation (also spelled colocation) stands for a space or a room in a data center rented for the<br />
customer's own IT equipment. The term open collocation usually refers to a space shared by several<br />
customers.<br />
• Cage – suite – room refer to a separate room or lockable section in a data center. This separate section<br />
is reserved for the exclusive use of the customer's IT equipment. The section's design depends on the<br />
operator but customers usually decide on matters such as cabling and infrastructure in these sections.<br />
• Rack – cabinet: These terms are used interchangeably. Depending on the data center operator and the<br />
rented space, the customer can decide on the models to use. Sometimes ¼ and ½ racks are available.<br />
Particularly in hosting services, the number of height units is also relevant.<br />
2.1.2. Typology Overview<br />
There is no information available on the total number of data centers worldwide and their differentiation and<br />
classification in terms of types and sizes.<br />
Different authors and organizations classify data centers into different types, but a globally applicable, consistent<br />
categorization does not yet exist. The approaches differ in the criteria they apply:<br />
Typology based on statistics<br />
This typology has been applied to provide information about the energy consumption of data centers (see US-EPA<br />
2007, TU Berlin 2008, Bailey et al. 2007). US-EPA uses the following terms, among others: server closet, server<br />
room, localized data center, mid-tier data center, enterprise-class data center.<br />
The German Federal Environment Agency further developed and refined the approach in a study of resources and<br />
energy consumption, published in November 2010. Some of the figures are presented below in section 2.1.3.<br />
Typology based on data center availability<br />
Reliability or availability is an essential quality criterion of a data center. There are a number of data center<br />
classifications based on availability (BSI 2009, Uptime Institute 2006, BITKOM 2009, and others).<br />
The following table presents some examples – for each level there are guidelines for data center operators on<br />
how to achieve it.<br />
Availability classes (BSI, GE):<br />
VK 0 ~95% approx. 2-3 weeks/year no requirements<br />
VK 1 99% 87.66 hours/year normal availability<br />
VK 2 99.9% 8.76 hours/year high availability<br />
VK 3 99.99% 52.6 minutes/year very high availability<br />
VK 4 99.999% 5.26 minutes/year highest availability<br />
VK 5 99.9999% 0.526 minutes/year disaster tolerant<br />
Tier classes (US Uptime Institute):<br />
Tier I 99.671% 28.8 hours/year<br />
Tier II 99.749% 22.0 hours/year<br />
Tier III 99.982% 1.6 hours/year<br />
Tier IV 99.995% 24 minutes/year<br />
Data center categories (BITKOM, GE):<br />
A 72 hours/year<br />
B 24 hours/year<br />
C 1 hour/year<br />
D 10 minutes/year<br />
E 0 minutes/year<br />
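The downtime figures above follow directly from the availability percentages (8,760 hours per year):<br />

```python
def downtime_hours_per_year(availability_percent: float) -> float:
    """Permitted downtime in hours per year for a given availability,
    assuming 365 days (8,760 hours)."""
    return (1 - availability_percent / 100) * 8_760

# 99.9% ("high availability", VK 2) allows 8.76 hours per year;
# Tier III (99.982%) allows about 1.6 hours.
print(round(downtime_hours_per_year(99.9), 2))    # 8.76
print(round(downtime_hours_per_year(99.982), 2))  # 1.58
```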
Typology based on data center purpose<br />
In many cases, data centers are classified on the basis<br />
of the underlying business model, e.g. housing and<br />
collocation data centers, high-performance data centers,<br />
and others.<br />
Typology based on operator type<br />
Suppliers of data centers often categorize data centers<br />
by their operators, e.g. by industry sector (bank,<br />
automobile, research, telecommunication, etc.) or in<br />
terms of public authorities or private companies.<br />
Another categorization distinguishes between enterprise<br />
data centers and service data centers.<br />
• Enterprise Data Centers (company-owned)<br />
Data centers performing all IT services for a<br />
company and belonging to that company (be it<br />
in the form of an independent company or a<br />
department).<br />
• Service Data Centers (service providers)<br />
Data centers specialized in providing data<br />
center services for third parties. Typical<br />
services are housing, hosting, collocation and<br />
managed services, and exchange services for<br />
the exchange of data.<br />
Enterprise data centers are being increasingly offered as<br />
service data centers (service providers) on the market,<br />
particularly by public utility companies.<br />
Company | Number of data centers | In Germany | In Switzerland | Outside Europe | Housing | Hosting<br />
Artmotion 1 x x x<br />
COLT >5 x x<br />
data11 1 x x<br />
Databurg 1 x x<br />
datasource Switzerland 1 x x x<br />
Equinix >90 x x x x<br />
e-shelter 7 x x x<br />
Global Switch 8 x x x<br />
green.ch 4 x x x<br />
Hetzner >3 x x x<br />
Host-Europe 2 x x x<br />
I.T.E.N.O.S. >8'000 x x x<br />
interXion 28 x x x x<br />
Layerone 1 x x<br />
Level 3 >200 x x x x<br />
Strato 2 x x<br />
Telecity 8 x x x<br />
telemaxx 3 x x<br />
Selection of housing and hosting service providers, 2011<br />
2.1.3. Number and Size of Data Centers<br />
The above-mentioned study by the Federal Environment Agency included a classification for the German market.<br />
The figures and data in the following table were taken from that study.<br />
Data center type: Server cabinet | Server room | Small data center | Mid-size data center | Large data center<br />
Equipment:<br />
Ø number of servers: 4.8 | 19 | 150 | 600 | 6,000<br />
Ø total power: 1.9 kW | 11 kW | 105 kW | 550 kW | 5,700 kW<br />
Ø floor size: 5 m² | 20 m² | 150 m² | 600 m² | 6,000 m²<br />
Ø network, copper: 30 m | 170 m | 6,750 m | 90,000 m | 900,000 m<br />
Ø network, fiber optic: – | 10 m | 1,500 m | 12,000 m | 120,000 m<br />
Number of data centers in Germany:<br />
2008: 33,000 | 18,000 | 1,750 | 370 | 50 (total 53,170)<br />
Extrapolation for 2015, business as usual: 34,000 | 17,000 | 2,150 | 680 | 80 (total 53,910)<br />
Extrapolation for 2015, Green IT: 27,000 | 14,000 | 1,750 | 540 | 75 (total 43,365)<br />
2.2. Classes (Downtime and Redundancy)<br />
The best-known classification system defines availability in data centers for both physical structures and technical<br />
facilities and was developed by the Uptime Institute. The Institute was founded in 1993, is based in Santa Fe,<br />
New Mexico (USA), and numbers approx. 100 members. The tier classes I to IV define the probability that a system<br />
will be functional over a specified period of time. SPOF (Single Point of Failure) refers to a component of the system<br />
whose failure causes the entire system to collapse. There can be no SPOF in a high-availability system.<br />
2.2.1. Tiers I – IV<br />
Among other factors, the tier classification is based on the elements of a data center's infrastructure. The lowest<br />
rating among the individual elements (cooling, power supply, communication, monitoring, etc.) determines the overall<br />
evaluation. Also taken into account are the sustainability of measures, operational processes and service assurance.<br />
This is particularly evident in the transition from Tier II to Tier III, where the alternative supply path allows<br />
maintenance work to be performed without interfering with the operation of the data center, which in turn is<br />
manifested in the MTTR value (Mean Time to Repair).<br />
Tier requirements: TIER I | TIER II | TIER III | TIER IV<br />
Distribution paths, power and cooling: 1 | 1 | 1 active / 1 alternate | 2 active<br />
Redundancy, active components: N | N+1 | N+1 | 2 (N+1)<br />
Redundancy, backbone: no | no | yes | yes<br />
Redundancy, horizontal cabling: no | no | no | optional<br />
Raised floors: 12" | 18" | 30"-36" | 30"-36"<br />
UPS / generator: optional | yes | yes | dual<br />
Concurrently maintainable: no | no | yes | yes<br />
Fault tolerant: no | no | no | yes<br />
Availability: 99.671% | 99.749% | 99.982% | 99.995%<br />
N: Needed Source: Uptime Institute<br />
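For planning checklists, the tier table can be captured in a small lookup structure; this encoding is illustrative, not an official Uptime Institute artifact:<br />

```python
# Condensed, illustrative encoding of the Uptime Institute table above.
TIERS = {
    "I":   {"paths": "1", "redundancy": "N",
            "concurrently_maintainable": False, "fault_tolerant": False},
    "II":  {"paths": "1", "redundancy": "N+1",
            "concurrently_maintainable": False, "fault_tolerant": False},
    "III": {"paths": "1 active / 1 alternate", "redundancy": "N+1",
            "concurrently_maintainable": True, "fault_tolerant": False},
    "IV":  {"paths": "2 active", "redundancy": "2 (N+1)",
            "concurrently_maintainable": True, "fault_tolerant": True},
}

def minimum_tier(concurrent_maintenance: bool, fault_tolerance: bool) -> str:
    """Lowest tier meeting two common planning requirements
    (dicts preserve insertion order, so tiers are checked I -> IV)."""
    for name, spec in TIERS.items():
        if concurrent_maintenance and not spec["concurrently_maintainable"]:
            continue
        if fault_tolerance and not spec["fault_tolerant"]:
            continue
        return name
    return "IV"

print(minimum_tier(True, False))  # III
```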
Tier I<br />
This class is appropriate for smaller companies and start-ups. There is no need for extranet applications; Internet<br />
use is mainly passive, and availability is not a top priority.<br />
A Tier I data center basically has non-redundant capacity components and a single non-redundant distribution<br />
network. An emergency power supply, uninterruptible power supply and raised floors are not required;<br />
maintenance and repair work must be scheduled, and failures of site infrastructure components will cause disruption<br />
of the data center.<br />
Tier II<br />
This is the class for companies that have already moved part of their business processes online, mostly during<br />
standard office and business hours. In the event of non-availability there will be delays but no data loss. There are<br />
no business-critical delays (non-time critical backups).<br />
A Tier II data center has redundant capacity components (N+1) and a single non-redundant distribution<br />
network. An uninterruptible power supply and raised floors are required.<br />
Tier III & IV<br />
These are the classes for companies that use their IT equipment for internal and external electronic business<br />
processes and require 24-hour availability. Maintenance work or shut-down times have no damaging effects. Tier<br />
III and IV are the basis for companies operating in e-commerce, electronic market transactions or financial<br />
services (high-reliability security backups).<br />
A Tier III data center has redundant capacity components and multiple distribution networks that serve the site's<br />
computer equipment. Generally, only one distribution network serves the computer equipment at any time.<br />
A fault-tolerant (Tier IV) data center has redundant capacity components and multiple distribution networks<br />
simultaneously serving the site's computer equipment. All IT equipment is dual powered and installed properly so<br />
as to be compatible with the topology of the site's architecture.<br />
R&M <strong>Data</strong> <strong>Center</strong> <strong>Handbook</strong> V2.0 © 08/2011 Reichle & De-Massari AG Page 23 of 156
www.datacenter.rdm.com<br />
2.2.2. Classification Impact on Communications Cabling<br />
To ensure availability and reliability of the communications network in a data center, a redundant cabling configuration<br />
is required, in accordance with the respective Tier level.<br />
<strong>Data</strong> center layout is described in section 3.2. and cabling architecture in 3.4. The transmission media (copper<br />
and fiber optic) used for the specific areas are discussed in section 3.9.<br />
Tier I: Backbone cabling, horizontal cabling and active network components are not redundant. The set-up is a simple star topology, so network operation can be interrupted. However, data integrity must be ensured.<br />
Tier II: Here too, backbone cabling and horizontal cabling are not redundant, but the network components and their connections are. This set-up is a star topology as well; network operation may only be interrupted at specified times, or only minimally during peak hours of operation.<br />
Tier III: Both backbone cabling and active network components are redundant in this star topology, since network operation must be maintained without interruption during defined periods and during peak hours of operation.<br />
Tier IV: Backbone cabling and all active components such as servers, switches, etc., as well as the uninterruptible power supply and the emergency power generator, are redundant, 2 x (N+1). Horizontal cabling may also be redundant. Uninterrupted system and network functions must be ensured 24/7. This Tier level is completely fault-tolerant, meaning that maintenance and repair work can be performed without interrupting operation. There is no SPOF (Single Point of Failure) at this Tier level.<br />
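The effect of the duplicated distribution networks can be illustrated with standard reliability-block arithmetic: elements in series multiply their availabilities, while redundant parallel paths fail only if every path fails. The component availabilities below are hypothetical, and failures are assumed to be independent:

```python
# Reliability-block sketch: series elements multiply availabilities;
# redundant parallel paths are down only if all paths are down.
# Component availabilities are hypothetical; failures assumed independent.

def series(*availabilities: float) -> float:
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(*availabilities: float) -> float:
    downtime_prob = 1.0
    for a in availabilities:
        downtime_prob *= (1.0 - a)
    return 1.0 - downtime_prob

# One distribution path: access switch -> backbone link -> core switch
single_path = series(0.999, 0.999, 0.999)
# Tier IV style: two such paths simultaneously active
dual_path = parallel(single_path, single_path)

print(f"single path: {single_path:.6f}")   # ~0.997003
print(f"dual path:   {dual_path:.6f}")     # ~0.999991
```

The chain of three elements is less available than any single element, while the duplicated path pushes unavailability down by several orders of magnitude, which is exactly why the SPOF-free topology pays off.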
The four classes Tier I to IV, listed in the American standard TIA-942, are associated with the Uptime Institute and<br />
were integrated by the TIA-942 for classification purposes. However, it is important to note that the Tiers in TIA-942<br />
only resemble the Uptime Institute Tiers on the surface. The Tier classification of the Uptime Institute is based on<br />
a comprehensive analysis of a data center, from the point of view of the user and the operator. The TIA on the<br />
other hand is standard-oriented and provides decisions on the basis of a Yes/No checklist.<br />
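As an illustration of such a Yes/No checklist, the criteria from the tier comparison table can be turned into a simple classifier. This is a sketch in the spirit of the TIA-942 approach; the feature names are our shorthand, not official TIA-942 terminology:

```python
# Illustrative Yes/No checklist in the spirit of the TIA-942 approach.
# Feature names are shorthand for the table criteria, not official terms.

def classify_tier(redundant_components: bool,
                  multiple_paths: bool,
                  concurrently_maintainable: bool,
                  fault_tolerant: bool) -> str:
    """Each tier adds one requirement on top of the previous one."""
    if (redundant_components and multiple_paths
            and concurrently_maintainable and fault_tolerant):
        return "Tier IV"
    if redundant_components and multiple_paths and concurrently_maintainable:
        return "Tier III"
    if redundant_components:
        return "Tier II"
    return "Tier I"

# A concurrently maintainable site that is not yet fault-tolerant:
print(classify_tier(True, True, True, False))  # Tier III
```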
For cabling structure designers, the TIA-942 Tiers offer an appropriate basis for designing a data center. The names differ slightly as well: TIA-942 uses Arabic numerals, whereas the Uptime Institute uses Roman numerals.<br />
Implementation Time and Investment Costs<br />
TIER I TIER II TIER III TIER IV<br />
Implementation time 3 months 3 - 6 months 15 - 20 months 15 - 20 months<br />
Relative investment costs 100% 150% 200% 250%<br />
Costs per square meter ~ $ 4,800 ~ $ 7,200 ~ $ 9,600 ~ $ 12,000<br />
Source: Uptime Institute<br />
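A back-of-the-envelope build-cost estimate can be derived from the table's per-square-meter figures; the 500 m² floor area below is assumed purely for illustration:

```python
# Back-of-the-envelope build cost from the per-square-meter figures above.
# The 500 m^2 floor area is assumed purely for illustration.

COST_PER_SQM_USD = {
    "Tier I": 4_800,
    "Tier II": 7_200,
    "Tier III": 9_600,
    "Tier IV": 12_000,
}

def build_cost_usd(tier: str, floor_area_sqm: float) -> float:
    return COST_PER_SQM_USD[tier] * floor_area_sqm

for tier in COST_PER_SQM_USD:
    print(f"{tier}: ~${build_cost_usd(tier, 500):,.0f}")
# Tier IV at 500 m^2 comes to ~$6,000,000, 2.5x the Tier I figure.
```

Note that the per-square-meter figures scale exactly with the relative investment costs in the table (100% to 250%).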
2.3. Governance, Risk Management and Compliance<br />
Today, a large number of organizations (public authorities and private companies) rely on modern information and<br />
communication technologies (ICT). Information technology provides the basis for business processes. More and<br />
more companies are therefore highly dependent on IT systems that work flawlessly.<br />
In 2008, the Federal Statistical Office of Germany supplied the following information:<br />
• 97% of all companies with more than 10 employees use IT systems;<br />
• 77% of these companies present their products on the Internet;<br />
• 95% of all companies in Germany make use of the Internet.<br />
Since the 1960s, Switzerland has ranked second worldwide (behind the U.S.) in terms of IT use. In proportion to the size of the population, more computers are used in Switzerland than in other European countries.<br />
Of businesses that suffered a major system failure ...<br />
• 43% never recover<br />
• 28% go out of business within three years<br />
• 23% are acquired or merge<br />
• 8% fully recover<br />
Source: Wirtschaftswoche magazine, Issue 18, 24 April 1997<br />
Governance, risk management and compliance or GRC is an umbrella term that covers an organization's approach<br />
to these three areas.<br />
(Figure: overlap of IT Governance, IT Risk Management and IT Compliance)<br />
In 2010, a scientifically established definition of Integrated Governance, Risk and Compliance Management, validated by GRC experts, was published (Racz et al. 2010): “GRC is a holistic approach that ensures an organization acts ethically and in accordance with its risk appetite, internal policies and external regulations through the alignment of strategy, processes, technology and people, to improve efficiency and effectiveness.”<br />
Governance<br />
Governance is corporate management by defined guidelines. These guidelines include the definition of corporate<br />
goals, the planning of the necessary resources to accomplish them and the methods applied to implement them.<br />
Risk Management<br />
Risk management is the set of processes by which management identifies, analyzes, and, where necessary,<br />
responds appropriately to risks that might adversely affect realization of the organization's business objectives.<br />
Possible responses include early recognition, controlling, avoiding, and damage control.<br />
Compliance<br />
Compliance means conforming to internal and external standards in the flow and processing of information.<br />
Requirements include parameters from standardization drafts, data access regulations and their legal bases.<br />
The relevance of these three areas in terms of IT systems will be discussed in the following sections.<br />
2.3.1. IT Governance<br />
IT governance refers to the corporate structuring, controlling and monitoring of the IT systems and IT processes in<br />
a company. Its objective is to dovetail IT processes with corporate strategy; it is closely linked to overall corporate<br />
governance.<br />
The main components of IT governance are:<br />
• Strategic Alignment: continuously aligning IT processes and structures with strategic enterprise objectives<br />
• Resources Management: responsible and sustainable use of IT resources<br />
• Risk Management: identification, evaluation and management of IT-related risks<br />
• Performance Measurement: measuring the performance of IT processes and services<br />
• Value Delivery: monitoring and evaluating IT contributions<br />
The implementation of IT governance is supported by powerful, internationally recognized procedures. These<br />
include:<br />
• Control model with a focus on financial reporting: COSO (Committee of Sponsoring Organizations of the Treadway Commission)<br />
• Corporate governance in information technology: ISO/IEC 38500:2008<br />
• Control model for IT management: Cobit (Control Objectives for Information and Related Technology)<br />
• Implementation of IT service management: ISO 20000, ITIL (Information Technology Infrastructure<br />
Library)<br />
• Information security: ISO/IEC 27002 and basic IT protection catalogs<br />
2.3.2. IT Risk Management<br />
IT risk management helps to ensure an organization’s strategic objectives are not jeopardized by IT failure.<br />
The term risk refers to any negative deviation from planned values – whereas chance refers to any positive<br />
deviation.<br />
Reasons to implement an IT risk management system:<br />
• Legal aspect<br />
  o Careful management (see SOX, KonTraG, RRG)<br />
• Economic aspects<br />
  o Providing fundamental support by reducing errors and failures in IT systems<br />
  o Negotiating advantage with potential customers by enhancing their trust in the ability to supply<br />
  o Advantage in rating assessment of banks for credit approval<br />
  o Premium advantages with insurance policies (e.g. fire and business interruption insurance)<br />
• Operational aspects<br />
  o Wide range of applications and saving options<br />
There are four classic risk response strategies:<br />
• Avoidance<br />
• Mitigation<br />
• Transfer<br />
• Acceptance<br />
IT risks can be categorized as follows:<br />
• Organizational risks<br />
• Legal and economic risks<br />
• Infrastructural risks<br />
• Application and process-related risks<br />
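In practice, one of the four response strategies is typically chosen from a likelihood/impact assessment of each risk. A minimal sketch with illustrative thresholds (not a normative mapping):

```python
def response_strategy(likelihood: float, impact: float) -> str:
    """Map a risk's likelihood and impact (both on a 0..1 scale) to one of
    the four classic response strategies. Thresholds are illustrative only."""
    if likelihood >= 0.5 and impact >= 0.5:
        return "Avoidance"    # likely and severe: redesign to remove the risk
    if likelihood >= 0.5:
        return "Mitigation"   # likely but tolerable: reduce likelihood/impact
    if impact >= 0.5:
        return "Transfer"     # rare but severe: insure or outsource
    return "Acceptance"       # minor residual risk: accept and monitor

print(response_strategy(0.8, 0.9))  # Avoidance
print(response_strategy(0.1, 0.9))  # Transfer
```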
2.3.3. IT Compliance<br />
IT compliance means conforming to current IT-related requirements, i.e. laws, regulations, strategies and<br />
contracts. Compliance requirements typically include information security, availability, data storing and data<br />
protection.<br />
IT compliance mainly affects stock corporations and limited liability companies (Ltd.) since in these companies<br />
CEOs and management can be held personally liable for compliance with legal regulations. Non-observance may<br />
result in civil and criminal proceedings. The Federal <strong>Data</strong> Protection Act in Germany specifies a custodial<br />
sentence of up to two years or a fine in the event of infringement.<br />
Frequently, there are standards and good practice guidelines to comply with, generally by contractual agreement<br />
with customers or competitors.<br />
The core task basically involves documentation and the resulting adaptation of IT resources as well as the<br />
analysis and evaluation of potential problems and hazards (see also risk analysis). IT resources refer to hardware,<br />
software, IT infrastructures (buildings, networks), services (e.g. web services) and the roles and rights of software<br />
users. It is crucial that the implementation of compliance is understood to be a continuous process and not a<br />
short-term measure.<br />
2.3.4. Standards and Regulations<br />
There are numerous bodies responsible for developing and specifying security standards and regulations<br />
worldwide.<br />
The German Federal Association for Information Technology, Telecommunications and New Media (BITKOM) has compiled basic standards on IT security and risk management in its compass of IT security standards (current edition 2009). Below are some extracts of particular relevance for data centers.<br />
The compass assigns each standard to one of five categories:<br />
• Information Security Management System (ISMS): ISO/IEC 27001, ISO/IEC 27002, ISO/IEC 27006, IT Baseline Protection (IT-GS, Germany)<br />
• Security measures and monitoring: ISO/IEC 18028, ISO/IEC 24762, BS 25777:2008<br />
• Risk management: ISO/IEC 27005, MaRisk / MaRisk VA<br />
• Relevant standards: COSO, ISO/IEC 38500, Cobit, ITIL, IDW PS 330 (Germany), SWISS GAAP FER (Switzerland)<br />
• Regulations: KonTraG (Germany) / RRG (Switzerland), SOX / EURO SOX, Basel II/III, Solvency II, BDSG (Germany) / DSG (Switzerland)<br />
(Table: relevance of each standard – high, medium or low – for different enterprise types: banks/insurances, authorities/administrations, consultancy firms, HW/SW manufacturers, IT service providers, public health system, law firms, skilled trades and industry, service providers, and internationally oriented companies)<br />
Information Security Management System (ISMS)<br />
• ISO/IEC 27001<br />
This standard describes the basic requirements concerning the ISMS in an organization (company or public authority). It emerged from the British standard BS 7799-2.<br />
Its objective is to specify the requirements for an ISMS in a process approach.<br />
The standard primarily addresses company management and IT security managers, and secondarily<br />
implementation managers, technicians and administrators.<br />
The ISMS implementation can be audited by internal and external auditors.<br />
• ISO/IEC 27002<br />
This is a guide to information security management. The standard emerged from the British BS 7799-1.<br />
Basically, this standard is to be applied where a need for information protection exists.<br />
The standard addresses IT security managers.<br />
• ISO/IEC 27006<br />
Requirements for bodies providing audits and certifications of information security management systems.<br />
• IT Baseline Protection<br />
Since 1994, the Federal Office for Information Security (BSI) in Germany has been issuing the IT<br />
Baseline Protection Manual, which provides detailed descriptions of IT security measures and<br />
requirements for the IT security management.<br />
In 2006 the manual was adapted to international standards, rendering it fully compatible with ISO/IEC<br />
27001 while also incorporating the recommendations specified in ISO/IEC 27002.<br />
ISO/IEC 27001 certification based on IT baseline protection can be applied for with the BSI.<br />
Security Measures and Monitoring<br />
The following standards deal with enhancing IT network security. IT network security is not restricted to internal<br />
corporate networks, but also includes the security of external network access points. A selection of standards<br />
follows below.<br />
• ISO/IEC 18028 (soon 27033)<br />
The objective of this standard is to focus on IT network security by specifying detailed guidelines aimed<br />
at different target groups within an organization. It includes security aspects in the handling, maintenance<br />
and operation of IT networks plus their external connections.<br />
The standard comprises five parts:<br />
1. Guidelines for network security<br />
2. Guidelines for the design and implementation of network security<br />
3. Securing communications between networks using Security Gateways<br />
4. Remote access<br />
5. Securing communications between networks using Virtual Private Networks (VPN)<br />
• ISO/IEC 24762<br />
This standard provides guidelines on the provision of information and communications technology<br />
disaster recovery (ICT DR) services. It includes requirements and best practices for implementing<br />
disaster recovery services for information and communications technologies, for example emergency<br />
workstations and alternate processing sites.<br />
• BS 25777:2008<br />
The objective of this standard is to establish and maintain an IT Continuity Management system.<br />
Risk Management<br />
• ISO/IEC 27005<br />
This standard provides guidelines for a systematic, process-oriented risk management system that<br />
supports the requirements for a risk management in accordance with ISO/IEC 27001.<br />
• MaRisk / MaRisk VA<br />
The Minimum Requirements for Risk Management (MaRisk) for banks were first issued in 2005 by the<br />
German Federal Financial Supervisory Authority (BaFin). They include requirements for IT security and<br />
disaster recovery planning. In 2009 another edition was published, which was expanded to include<br />
insurance companies, leasing and factoring companies (MaRisk VA).<br />
Relevant Standards<br />
• COSO<br />
The COSO model was developed by the organization of the same name (Committee of Sponsoring<br />
Organizations of the Treadway Commission). It provides a framework for internal control systems and<br />
describes:<br />
o A method for introducing the primary components of an internal control system, e.g. the control environment in a company<br />
o Risk evaluation procedures<br />
o Specific control activities<br />
o Information and communication measures in a company<br />
o Measures required for the monitoring of the control system<br />
COSO is the basis of reference models like Cobit and facilitates their introduction.<br />
• ISO/IEC 38500<br />
The standard "Corporate Governance in Information Technology – ISO/IEC 38500” describes six key<br />
principles for corporate governance in IT:<br />
o Responsibility: adequate consideration of IT interests by the top management<br />
o Strategy: include IT potential and IT strategy in corporate strategy planning and align IT strategy with corporate principles<br />
o Acquisition: rigorously demand-oriented IT budget plan, based on transparent decision making<br />
o Performance: align IT services with the requirements of corporate areas and departments<br />
o Conformance: IT systems compliance with applicable laws, regulations and internal and external standards<br />
o Human Behavior: the needs of internal and external IT users are taken into account in the IT concept<br />
The standard assigns three functions to each of these principles:<br />
o Assessment: continuous assessment of IT performance<br />
o Control: controlling the business-specific focus of IT measures<br />
o Monitoring: systematic monitoring of the IT compliance and the IT systems productivity<br />
• Cobit<br />
To support management and IT departments in their responsibilities in connection with IT governance,<br />
the ISACA association (Information Systems Audit and Control Association) created Cobit (Control<br />
Objectives for Information and Related Technology), a comprehensive control system and framework<br />
encompassing all aspects of information technology use, from the planning to operation and finally<br />
disposal, all in all providing a comprehensive view of IT issues.<br />
Here are the Cobit elements and their respective target groups:<br />
o Executive summary: senior executives such as the CEO, CIO<br />
o Governance and control framework: senior operational management<br />
o Implementation guide: middle management, directors<br />
o Management guidelines: middle management, directors<br />
o Control objectives: middle management<br />
o IT assurance guide: line management and auditors<br />
• ITIL<br />
ITIL is a best practices reference model for IT service management (ITSM). The acronym ITIL originally<br />
derived from "IT Infrastructure Library", though the current third edition has a wider scope.<br />
The objective of ITIL is to shift the scope of the IT organization from the purely technological to include<br />
processes, services and customers needs.<br />
The ITIL V3 version improved the strategic planning process to dovetail IT service management with the<br />
corporate strategy and thus helped ensure compatibility with the IT service management standard ISO<br />
20000.<br />
IT professionals can acquire ITIL certification. There are three levels:<br />
1. Foundation<br />
2. Intermediate<br />
3. Advanced<br />
Companies can have their process management certified in accordance with the international standard<br />
ISO/IEC 20000 (formerly BS 15000).<br />
• IDW PS 330 (Germany)<br />
When performing an annual audit, an auditor is required by law to examine the (accounting-relevant) IT<br />
systems of the audited company. In accordance with articles 316 to 324 of the German Commercial<br />
Code (HGB), the auditor also assesses the regularity of the accounting system and thus compliance with<br />
principles of correct accounting (GoB) as stipulated in articles 238 ff., in article 257 of the HGB as well as<br />
in articles 145 to 147 of the Tax Code (AO).<br />
Furthermore, IT systems are audited according to principles of regular data-processing supported<br />
accounting systems (GoBS) and the accompanying letter of the Federal Ministry of Finance (BMF). The<br />
principles of data access and verification of digital documents (GDPdU) as stipulated in the BMF letter,<br />
containing regulations for the storing of documents, need also to be considered.<br />
Depending on the complexity of the IT systems in operation, comprehensive testing in accordance with<br />
the IDW Auditing Standard 330 (IDW PS 330) of the IT system or of selected units or subsystems of the<br />
IT system may be required.<br />
• SWISS GAAP FER (Switzerland)<br />
The Swiss GAAP FER focuses on the accounting system of small and medium-sized organizations and companies operating on a national level. Also included are non-profit organizations, pension funds, insurance companies and property and health insurers. These organizations are provided with an effective framework for authoritative accounting to provide a true and fair view of the company's net assets, financial position and earnings situation. Promoting communication with investors, banks and other interested parties is also a GAAP FER objective. Moreover, it increases comparability of annual financial reports across time and between organizations.<br />
Regulations<br />
• KonTraG (Germany)<br />
The KonTraG (Control and Transparency in Business Act) came into effect in 1998. It is not a standalone law but a so-called amending act (Artikelgesetz), meaning that its amendments and changes are incorporated into other economic laws such as the Stock Corporation Act, the Commercial Code or the Limited Liability Companies Act (GmbHG).<br />
The KonTraG is aimed at establishing business control and transparency in stock corporations and<br />
limited liability companies. This is achieved by setting up a monitoring system for the early identification<br />
of developments that threaten their existence and by requiring management to implement a corporate<br />
risk management policy. The act stipulates personal liability of members of the board of management,<br />
the board of directors and the managing director in the event of any infringement.<br />
• Accounting and Auditing Act (RRG, Switzerland)<br />
The comprehensive revision of Switzerland's audit legislation in 2008 made risk assessment compulsory.<br />
It is now subject to review by the auditing body. Overall responsibility and responsibility for monitoring lies<br />
with the highest decision-making and governing body of the company, e.g. the board of directors in a<br />
stock corporation. Responsibility for introduction and implementation lies with the board of managers.<br />
The revision of the auditing obligations is applicable to all corporate forms, i.e. stock corporations and<br />
companies in the form of limited partnerships, limited liability companies, collectives, and also<br />
foundations and associations. Publicly held companies and companies of economic significance in this<br />
respect need to subject their annual financial statements to proper auditing.<br />
• SOX (US)<br />
The Sarbanes-Oxley Act of 2002 (also called SOX, SarbOx or SOA) is a United States federal law enacted on July 30, 2002 in reaction to a number of major corporate and accounting scandals, including those concerning Enron and WorldCom. Its objective is to improve the reliability of financial reporting by the companies that dominate the nation's securities market.<br />
The bill defines responsibilities of management and external and internal auditors. The companies have<br />
to prove that they have a functional internal auditing system. The boards are responsible for the accuracy<br />
and validity of corporate financial reports.<br />
The bill's provisions apply to all companies worldwide that are listed on an American stock exchange,<br />
and, in certain cases, their subsidiaries as well.<br />
• EURO-SOX<br />
The 8th EU Directive, also known as EURO-SOX, came into effect in 2006. It is aimed at establishing an<br />
internationally recognized regulation for the auditing of financial statements in the European Union (EU).<br />
It closely resembles its American equivalent, the SOX act.<br />
But unlike SOX, EURO-SOX applies to all capital companies, not only to market-listed companies. Small<br />
and medium-sized companies are required to address issues such as risk management, IT security and<br />
security audits.<br />
In Germany, the EU directive was incorporated into the Accounting Law Modernization Act (BilMoG),<br />
turning it into applicable national law, mandatory as of financial year 2010.<br />
• Basel II<br />
The term Basel II refers to the entire set of directives on equity capital that were put forward by the Basel<br />
Committee for Banking Supervision in the past years. These directives apply to all banking institutions<br />
and financial service providers. In Switzerland, its implementation was carried out by the FINMA and in<br />
Germany the rules were implemented through the German Banking Act (KWG), the Solvency Regulation<br />
(SolvV) and the Minimum Requirements for Credit Management (MaRisk).<br />
Basel II primarily relates to internal banking regulations. Banks apply to their customers the same credit-risk-related standards with which they themselves have to comply. As a result, in the private sector as well, loan conditions depend directly on credit risk.<br />
• Basel III<br />
The term Basel III refers to the rules and regulations adopted by the Basel Committee at the Bank for<br />
International Settlements in Basel (Switzerland) as an extension of the existing equity capital regulations<br />
for financial institutions. These supplementary rules and regulations are based on the experiences with<br />
Basel II, issued in 2007, and on knowledge gained as a result of the global financial crisis of 2007.<br />
The provisional end version of Basel III was published in December 2010, even though individual aspects<br />
are still under discussion. Implementation within the European Union is achieved by amendments to the<br />
Capital Requirements Directive (CRD) and expected to come into effect gradually starting in 2013.<br />
• Solvency II<br />
Solvency II is the insurance industry's equivalent of Basel II. It is scheduled to come into effect in<br />
European Union member states in the first quarter of 2013.<br />
• Federal <strong>Data</strong> Protection Act (BDSG, Germany) / <strong>Data</strong> Protection Act (DSG, CH)<br />
<strong>Data</strong> protection acts regulate the handling of personal data. From a data center perspective, the technical<br />
and organizational measures that might be necessary are of primary relevance, especially those concerning<br />
regulations on access control and availability control.<br />
In addition to national legislation, there are state-specific and canton-specific data protection laws, in<br />
Germany and Switzerland respectively.<br />
2.3.5. Certifications and Audits<br />
Certification and audit preparations are costly and time-consuming, so every organization must ask itself whether they make economic sense. Possible motivations and reasons for certification are:<br />
• Competitive differentiation:<br />
A certification can strengthen customer, staff and public trust in an organization. It can also make an<br />
organization stand out among competitors, provided it is not a mass product. Certifications are also<br />
gaining importance in invitations to tender.<br />
• Internal benefit of a certification:<br />
A certification can have the additional benefit of reducing possible security weaknesses.<br />
• Regulations<br />
The major certifications and audits in the data center sector are:<br />
• DIN EN ISO 9001- Quality Management<br />
• ISO 27001 – Information Security Management Systems<br />
• ISO 27001 – based on Basic IT Protection<br />
• ISO 20000 – IT Service Management<br />
• DIN EN ISO 14001 – Environmental Management<br />
• SAS 70 Type II certification – SOX relevant<br />
• IDW PS951 – German equivalent of the American SAS70<br />
• <strong>Data</strong> <strong>Center</strong> Star Audit – ECO Association (Association of the German Internet Economy)<br />
• TÜV certification for data centers – (Technical Inspection Association, Germany)<br />
• TÜV certification for energy-efficient data centers<br />
2.3.6. Potential Risks<br />
According to a report by the IT Policy Compliance Group of 2010, 80 percent of companies have poor visibility into<br />
their IT risks, taking three to nine months or longer to classify their IT risk levels. Inability to prioritize risks, lack of<br />
a comprehensive risk view and inadequate control assessments all contribute to this problem.<br />
Liability Risks<br />
The liability risk matrix maps the need for regulation and need for action at three task levels to responsibilities, applicable legislation, and potential damage and losses:<br />
Strategic tasks<br />
o Responsibilities: Management / CEO, Supervisory Board<br />
o Legislation: see regulations<br />
o Potential damage and losses: losses due to system failure; insolvency; increased costs of corporate loans; loss of insurance coverage; image loss; monetary fines<br />
Conceptual tasks<br />
o Responsibilities: Management / CEO, Data Protection Officer, Head of IT<br />
o Legislation: see regulations; employment contract<br />
o Potential damage and losses: see strategic tasks; data loss; unauthorized access; virus infection; loss due to failed projects; loss of claims against suppliers; loss of development know-how<br />
Operative tasks<br />
o Responsibilities: Management / CEO, Data Protection Officer, Head of IT, Staff<br />
o Legislation: see regulations; employment contract; Commercial Code (HGB); Copyright Act (UrhG); Penal Code (StGB)<br />
o Potential damage and losses: no annual audit confirmation; taxation assessment; imprisonment; corporate shutdown / loss of production; capital losses; image loss; loss of business partners or data<br />
Excerpt: Liability Risk Matrix (Matrix der Haftungsrisiken), Bitkom, as of March 2005<br />
Companies have taken to including limited liability for negligent behavior in agreements between managing directors and the company. In the case of limited liability companies, the members also have the option of formally approving the managing director's conduct, rendering the company's claims for damages null and void. This is not true in the case of stock corporations, where approval of management does not result in the waiver of damage claims.<br />
Good insurance is a valuable asset because...<br />
… Managers fail to see their liability risks<br />
… Managers are usually not clear about the scope of their liability risk<br />
… There is a growing risk of being held liable for mistakes at work and losing everything in the process<br />
… Managing directors and management can be held liable if they fail to provide sufficient IT security for their<br />
companies<br />
… The boss is not the only one held liable!<br />
Operational Risks<br />
Operational risks – as defined in Basel II – are "the risks of loss resulting from inadequate or failed internal<br />
procedures, people and systems or from external events".<br />
The primary focus, in terms of operational risks, is on:<br />
• Information system failures<br />
• Security policy<br />
• Human resource security<br />
• Physical and environment security<br />
• IT system productivity<br />
• Keeping systems, procedures and documentation up-to-date<br />
• Information and data of all business operations<br />
• Evaluation and identification of key figures<br />
• Clear, previously drawn-up emergency plan<br />
• Backups of the entire database
Page 34 of 156 © 08/2011 Reichle & De-Massari AG R&M <strong>Data</strong> <strong>Center</strong> <strong>Handbook</strong> V2.0
www.datacenter.rdm.com<br />
Source: Marsh – Business Continuity Benchmark Report 2010 – survey of 225 companies
Among the factors that can disrupt business processes are:<br />
• Natural disasters, acts of God (fire, flooding, earthquake, etc.)<br />
• Breaches of data and information security<br />
• Fraud, cybercrime and industrial espionage<br />
• Technical failure and loss of business-critical data<br />
• Terror attacks and transporting hazardous goods<br />
• Disruption of business processes or the supply chain and disturbances occurring on business partners’<br />
premises<br />
• Organizational deficiencies, strikes, lockouts<br />
• Official decrees<br />
"Experience tells us that a fire can break out at any given moment. When decades go by without a fire, it doesn't mean that there is no danger, it simply means that we have been lucky so far. Yet, the odds are still the same and a fire can still break out at any given moment."

This succinct statement, made by a Higher Administrative Court in Germany in 1987, still says it all.

It is obvious that data centers must be equipped with reliable, fast early fire-detection systems, extinguishing systems and fire prevention systems.
Water as an extinguishing agent, however, is altogether the wrong choice. Today, specialized companies supply specific extinguishing systems for every type of fire. When a data center is expanded or security measures are added later, specific planning of these systems is important.
Cost of Downtime

In 2010, market researchers from Aberdeen Research determined that only 3 percent of companies had run a data center with 100-percent availability over the past twelve months. Only 4 percent of companies stated that the availability of their data center was 99.999 percent.
The following diagram shows how to calculate the cost of downtime.

Costs arising during the downtime:
- Lost end-user productivity = Hourly cost of users impacted x Hours of downtime
- Lost IT productivity = Hourly cost of the IT staff impacted x Hours of downtime
- Lost sales = Loss of sales per hour x Hours of downtime

Costs arising after the downtime:
- Other loss of business = Image damage (customer service) + Costs of overtime + Missed deadlines + Contractual fines or fees

Source: securitymanager 2007
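The calculation above can be sketched as a small helper. This is only an illustration of the formula; the function name and all figures in the example call are placeholder assumptions, not values from the handbook.

```python
def downtime_cost(hours_down, hourly_user_cost, hourly_it_cost,
                  sales_per_hour, other_business_loss=0):
    """Return (costs during the downtime, costs after it, total).

    Costs during the downtime follow the three product terms above;
    costs after the downtime (image damage, overtime, missed deadlines,
    contractual fines) are passed in as a single lump sum here.
    """
    lost_user_productivity = hourly_user_cost * hours_down
    lost_it_productivity = hourly_it_cost * hours_down
    lost_sales = sales_per_hour * hours_down
    during = lost_user_productivity + lost_it_productivity + lost_sales
    after = other_business_loss
    return during, after, during + after

# Example with invented figures; 14 h is the CA survey's German average.
during, after, total = downtime_cost(
    hours_down=14,
    hourly_user_cost=2000,
    hourly_it_cost=500,
    sales_per_hour=10000,
    other_business_loss=25000,
)
print(during, after, total)
```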
The report "Avoidable Cost of Downtime" was commissioned by CA in 2010. The survey covered more than 1,800 companies, 202 of them in Germany. It reveals that systems in German companies are down for an average of 14 hours per year, 8 hours of which are required for full data restoration.
France € 499,358<br />
Germany € 389,157<br />
Norway € 320,069<br />
Spain € 316,304<br />
Sweden € 279,626<br />
Netherlands € 274,752<br />
Finland € 263,314<br />
UK € 244,888<br />
Denmark € 161,655<br />
Belgium € 84,050<br />
Italy € 33,844<br />
A publication by Marsh before the year 2000 provided the following figures:
• Costs for a 24-hour downtime are approx.:
o Amazon 4.5 million US dollars
o Yahoo! 1.6 million US dollars
o Dell 36.0 million US dollars
o Cisco 30.0 million US dollars
• AOL, Aug. 6, 1996: 24-hour outage due to maintenance work and human error; estimated costs of 3 million US dollars in customer discounts
• AT&T, April 4, 1998: 6-26 downtime due to software conversion; costs of 40 million US dollars in customer discounts

IT system downtimes that last longer than three days are considered existence-threatening for companies!
54 percent of customers switched to a competitor as a result of an IT system<br />
failure (Symantec study, 2011).<br />
2.4. Customer Perspective<br />
The following sections focus on the point of view of customers and decision-makers; their motives and what they<br />
expect from a data center, taking into account operational costs and energy consumption costs. The various<br />
aspects to consider when deciding between outsourcing and operating your own data center will also be<br />
discussed.<br />
2.4.1. <strong>Data</strong> <strong>Center</strong> Operators / Decision-Makers<br />
When discussing decision-makers and influencers with regard to data center decisions, it is best to distinguish between these two types:
• enterprise data centers and<br />
• service data centers<br />
Enterprise Data Centers

Organizations and persons in charge with respect to data centers need to be differentiated by data center type – from single server cabinet to large data center.

Corporate group directives
ICT, IT: CIO, Head of Information Technology, IT Director, IT Manager

In-house:
- Planning: design of IT architecture; network planning, network technology
- IT Department: IT Operations, Infrastructure Management/Services, Technology Management, IT Production, Technical Services/Support, Data Center
- Facility Management
- Purchasing Department, Technical Procurement, Procurement

External:
- Consultants, data center planners
- Suppliers, installers, service providers
- Partner companies
- Rental property owners
There are three organizational groups in a company that influence the purchasing decision:<br />
• ICT • Facility Management • Procurement<br />
The IT department is basically an internal customer of facility management; the relationship between the two is often characterized by silo thinking. The IT department restructures its thinking in cycles of 2 to 3 years, whereas facility management tends to move in sync with the construction industry, i.e. in a 20-year rhythm.
The facility staff in charge of data centers are mostly positioned in middle management or at head of department level, and they usually have a technical background.

Facility departments vary considerably between companies. Companies operating in rented premises often have none, or one with little influence. In many cases facility management is outsourced.
It is similar with IT; the following subsections are the most influential:<br />
• IT operations (infrastructure management, technology management, IT production,<br />
technical services, infrastructure services, data center, and others)<br />
• Network planning (network technology, network management, and others)<br />
• Heads of projects for new applications, infrastructure replacement, IT risk management measures, and<br />
others<br />
Another strong influencing factor is that management (CIO, Head of Information Technology, IT Director, IT Manager) often changes or reverses decisions made by facility management. Management decisions might be affected by all-in-one contracts, supplier contracts with service providers, and corporate policy.

Whether and to what extent the procurement department is involved in purchase decisions mainly depends on its position and on the corporate structure.

Corporate group directives are also very powerful in their influence on decisions, especially in the case of takeovers and mergers: procurement practices might change drastically, projects may be postponed or cancelled, and new contact persons have to be found.
Moreover, the influence factor of external consultants and experts needs to also be taken into account, the most<br />
important ones being:<br />
• Consultants held in high esteem by a company's management and contracted to provide support with a<br />
project.<br />
• <strong>Data</strong> center planners who are consulted because of their special know-how and relevant reference<br />
projects.<br />
• Suppliers of hardware, software and data center equipment who are interested in selling their products. If<br />
there is a longstanding business relationship, their recommendations must not be underestimated.<br />
Moreover, they are certain to have advance information on the ongoing project.<br />
• Installers (electrical work, cabling measures) who did good work for the company in the past. Their<br />
preference for systems and solutions as well as a good price offer usually leads to successful recommendations.<br />
• Suppliers like system integrators and technology partners who have successfully worked for the company<br />
before.<br />
• Partner companies with specific requirements for their suppliers and partners, where longstanding contacts at management level exchange recommendations.
• Rental property owners who include facility management in their lease terms.
Service Data Centers

Corporate group directives
Management

In-house:
- Technical department, operations, heads of projects
- Network management
- Procurement

External:
- Consultants, data center planners
- Suppliers, installers, service providers
- Partner companies
- Rental property owners
Here, the main influencers are the:<br />
• technical department (operations) and<br />
• network management<br />
When larger tasks are planned (extensions, conversions, new constructions or individual customer projects)<br />
project groups are formed. They prepare the management decisions. Most of these organizations work with a flat<br />
hierarchy, and the decision-makers are to be found in management.<br />
With global data center operators, specifications for products and partners are being increasingly managed<br />
centrally.<br />
Here too, the influence of external consultants and experts is considerable.<br />
As a rule, data center planners (engineering firms, specialized consultancy firms) take on the overall planning of a<br />
data center – i.e. from finding the right location, to the building shell, safety and fire protection facilities, power<br />
supply and raised floor design. In some cases, they are also involved with the selection of server cabinets but they<br />
are usually not concerned with passive telecommunications infrastructures.<br />
However, the market is undergoing changes. <strong>Data</strong> center planners are often commissioned to take on the entire<br />
project (new construction, extension, upgrade), including the passive telecommunications infrastructure. They<br />
usually work together in a partnership with industry-specific companies for individual projects. Often these planners<br />
also support tendering and contracting procedures and are consulted by management on decision processes.<br />
There is also a trend for companies to offer a complete package by way of acquisitions, from planning to construction up to the complete equipment including cabinets and cabling, thus using the "one-stop shopping" aspect as a selling point. Typical examples are Emerson and the Friedhelm Loh Group.
Often, service data center customers themselves hire service providers to equip their rented space or they define<br />
the specifications for products and solutions that service providers have to comply with.<br />
Service providers equipping data centers strongly influence the product selection. They usually have two sections,<br />
network planning and installation.<br />
<strong>Data</strong> center operators like to work with companies they know and trust, and they are open to their product<br />
recommendations and positive experience reports. This in turn affects the operators' purchase decisions, also due<br />
to their assumption that it will lower installation costs.<br />
When the data center is located in a rental property, one must also respect the owner's specifications.<br />
2.4.2. Motivation of the Customer<br />
A data center project requires substantial personnel and other resources, and it involves risks. Listed here are reasons for a company to undertake such a project – again divided by data center operator type:
Enterprise <strong>Data</strong> <strong>Center</strong>s<br />
• Increasing occurrence of IT failures<br />
• Existing capacities are not sufficient to meet the growing requirements, often due to increased power<br />
needs and climate control requirements<br />
• Technological developments in applications and concepts, such as the server and storage landscape,<br />
and virtualization and consolidation measures<br />
• Latency requirements, resulting from business models and applications<br />
• Requirements resulting from IT risk management and IT compliance, such as:<br />
o Backup data centers, disaster data centers<br />
o Increasing availability<br />
o Structuring problems<br />
o The data center is located in a danger zone<br />
o Inspection of security concepts<br />
• Consequences of IT governance requirements<br />
• Focusing on core competencies of the company<br />
• The existing data center will not be available in the future because it is being relocated or the operator will cease operations
• The systems installed are not state-of-the-art and no longer profitable<br />
• The terms of existing contracts are expiring (outsourcing, service contracts, rental agreements, hardware,<br />
and others)<br />
• Cost-saving requirements<br />
• <strong>Data</strong> center consolidation<br />
• Corporate restructurings<br />
• Centralization requirements<br />
• Continual improvement process<br />
• And many more<br />
Service <strong>Data</strong> <strong>Center</strong>s<br />
• Development of additional areas<br />
• Development of a new site<br />
• Customer requirements<br />
• Continual improvement process<br />
• Improving performance features<br />
• Cost reduction<br />
• Corporate directives<br />
• And many more<br />
2.4.3. Expectations of Customers<br />
The following sections focus on what customers expect of enterprise data centers. Service data center operators can draw on this experience in their efforts to meet the expectations of their own customers.
Availability<br />
The first section will discuss availability, including hardware and software security, reliable infrastructure and<br />
organizational security. In connection with data centers, the main points are:<br />
• Uninterrupted IT operations<br />
• 24/7 availability, 24/7 monitoring, 24/7 access<br />
• Flexibility in power supply and climate control to meet specific needs (rising trend)<br />
• On-demand models can be supported
• Structural and technical fire protection<br />
• Security systems such as burglar alarms, access control, video surveillance, building security, security<br />
personnel, security lighting, central building controls systems, etc.<br />
• Choice between several carriers, redundant connections<br />
• Cost optimization<br />
• Green IT, sustainability<br />
• Energy efficiency<br />
• Short response times for upgrades and extensions<br />
• Low latencies to meet the growing requirements in terms of internet presence<br />
• <strong>Data</strong> centers should be outside of flood-risk areas, dangerous production sites, airports, power plants or<br />
other high-risk zones<br />
• Distance between two data center sites<br />
• Profitability<br />
It can be observed that availability requirements are very high at the beginning of a project, but they are not necessarily reflected in the expectations in terms of price per m²/kVA.
Availability = security = costs<br />
Companies are strongly advised to review their availability requirements on a project-specific level where appropriate and to evaluate alternative concepts (e.g. the 2-location concept).
Determining availability not only has an impact on installation costs but also on operating costs. The Swiss Federal<br />
Office of Energy (SFOE) has developed a cost model for transparent analysis. The graphic below is part of it.<br />
[Figure: Data center cost model by availability (Data Center Kostenmodell nach Verfügbarkeit) – costs in Swiss francs per square meter as a function of availability, broken down into capital costs, operational costs excl. power, and energy costs. Source: Department of the Environment, Transport, Energy and Communications, Swiss Federal Office of Energy, Oct. 2008]
According to an analysis by IBM, data center operational costs accumulated over a period of 20 years are 3 to 6<br />
times higher than the original investment costs, with energy costs constituting 75 percent of the operational costs.<br />
The following table shows the utilization rate of the components in data centers on the basis of the Tier levels described in section 2.2.1.

| Degree of redundancy | N | N+1 | N+1 | N+1 | N+1 | 2N | 2(N+1) | 2(N+1) | 2(N+1) | 2(N+1) |
|---|---|---|---|---|---|---|---|---|---|---|
| Configuration | 1 | 1+1 | 2+1 | 3+1 | 4+1 | 2 | 2(1+1) | 2(2+1) | 2(3+1) | 2(4+1) |
| Number of components | 1 | 2 | 3 | 4 | 5 | 2 | 4 | 6 | 8 | 10 |
| Utilization rate of components | 100% | 50% | 66% | 75% | 80% | 50% | 25% | 33% | 37.5% | 40% |
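The utilization rates in the table follow from a single ratio: components actually needed divided by components installed. A minimal sketch, where the parameterization (components needed, spares per path, number of supply paths) is my own reading of the configuration column:

```python
def utilization(needed, spare, paths=1):
    """Utilization rate of redundant components:
    components actually needed / total components installed.

    'needed' and 'spare' describe one supply path (e.g. 3+1),
    'paths' is 2 for the 2N and 2(N+1) configurations.
    """
    total = paths * (needed + spare)
    return needed / total

# Reproduce a few table columns:
assert utilization(1, 0) == 1.0              # N: 100%
assert utilization(3, 1) == 0.75             # N+1, 3+1: 75%
assert utilization(1, 0, paths=2) == 0.5     # 2N: 50%
assert utilization(3, 1, paths=2) == 0.375   # 2(3+1): 37.5%
```

More redundancy means lower utilization per component, which is the cost driver behind the Tier comparison that follows.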
According to an IBM analysis, the increase in cost between a Tier-III and a Tier-IV installation amounts to 37 percent.<br />
Energy Costs<br />
The growing use of ICT also has an impact on the environment. ICT-related energy consumption in Germany increased from 38 TWh in 2001 to 55 TWh in 2007, which equals 10.5 percent of the country's total electricity consumption. The strongest growth was recorded in ICT infrastructures, i.e. servers and data centers.
<strong>Data</strong> center energy consumption in Germany amounted to 10.1 TWh in 2008, generating CO2 emissions of 6.4<br />
million tons. Approx. 50 percent of this energy consumption and related CO2 emissions is not used by the actual<br />
IT systems but by the surrounding infrastructure, e.g. the cooling system and power supply.<br />
Since 2006, the importance of developing measures to reduce energy consumption in data centers has increased.<br />
Innovations relating to power supply and climate control also contribute to reductions in energy consumption, as<br />
do other cost-effective measures such as arranging the racks in hot aisle/cold aisle configurations, or avoiding<br />
network cables in front of ventilation fans. There are various guidelines and recommendations available on<br />
measures to increase energy efficiency in data centers, issued, for example, by BITKOM or solution providers. In<br />
some cases, financial incentives are available for energy-saving measures in data centers.<br />
A trend analysis commissioned by the German Federal Environment Agency (UBA) indicated that the energy<br />
consumption in Germany will increase from 10.1 TWh in 2008 to 14.2 TWh in 2015 in a "business as usual"<br />
scenario, while in a "Green IT" scenario it could be reduced down to 6.0 TWh by 2015.<br />
| Data center types and quantities in Germany | 2008 | 2015 scenario: business as usual | 2015 scenario: Green IT |
|---|---|---|---|
| Server cabinets | 33,000 | 34,000 | 27,000 |
| Server rooms | 18,000 | 17,000 | 14,000 |
| Small data centers | 1,750 | 2,150 | 1,750 |
| Medium data centers | 370 | 680 | 540 |
| Large data centers | 50 | 90 | 75 |
| Energy demand in TWh | 10.1 | 14.2 | 6.0 |
What cannot be measured cannot be optimized. This is the problem with many data centers. A large number of companies cannot accurately state how much energy is needed for the ICT equipment alone, expressed as a proportion of overall consumption.

A 2010 study by the ECO association revealed that only 33% of data center operators employ staff responsible for energy efficiency, and at least 37.5% of operators run their data centers without specialized personnel responsible for the IT systems. At 31.25% of all operators, energy consumption does not affect the IT budget at all.
There are several key benchmarks for assessing the energy efficiency of a data center. They were introduced in section 1.10. Here is some additional information:

Green Grid (USA):
- PUE (Power Usage Effectiveness): the ratio of power consumed by the entire facility to the power consumed by the IT equipment alone
- DCIE (Data Center Infrastructure Efficiency): the ratio of power consumed by the IT equipment to the power consumed by the entire facility (= 1/PUE, i.e. the reciprocal of PUE)
- IEP (IT Equipment Power): the actual power load associated with all of the IT equipment, including computing, storage and networking equipment
- TFP (Total Facility Power): the actual power load associated with the data center facility, including climate control (cooling), power, surveillance, lighting, etc.

Uptime Institute (USA):
- SI-EER (Site Infrastructure – Energy Efficiency Ratio): the ratio of power consumed by the entire facility to the power consumed by the IT equipment alone
- IT-PEW (IT Productivity per Embedded Watt): the ratio of IT productivity (network transactions, storage, or computing cycles) to the IT equipment power consumption
- DC-EEP: value derived by multiplying SI-EER with IT-PEW
The most widely-used benchmark is the PUE but caution is advised since the data used in the calculation is not<br />
always comparable.<br />
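The two Green Grid metrics can be sketched in a few lines; the function names and the example loads are my own, chosen only to show the reciprocal relationship.

```python
def pue(total_facility_power, it_equipment_power):
    """Power Usage Effectiveness = total facility power / IT equipment power.
    In a real facility this is >= 1; overhead (cooling, power
    distribution, lighting) pushes it above 1."""
    return total_facility_power / it_equipment_power

def dcie(total_facility_power, it_equipment_power):
    """Data Center Infrastructure Efficiency = IT power / facility power,
    i.e. the reciprocal of PUE."""
    return it_equipment_power / total_facility_power

# Example: 500 kW total facility load, of which 250 kW is IT equipment
print(pue(500, 250))   # prints 2.0
print(dcie(500, 250))  # prints 0.5
```

Both metrics depend on where the power is measured, which is one reason the values quoted by operators are not always comparable.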
The table below is an approximate evaluation from the study carried out by the German Federal Environment Agency (UBA) in 2010.

"Good" PUE values currently try to outdo one another on the data center market; there are even claims of values below 1, which seems hard to believe.
| Ø PUE by data center type | 2008 | 2015 Green IT |
|---|---|---|
| Server cabinet | 1.3 | 1.2 |
| Server room | 1.8 | 1.5 |
| Small data center | 2.1 | 1.5 |
| Medium data center | 2.2 | 1.6 |
| Large data center | 2.2 | 1.6 |
As discussed above, operational costs are usually many times higher than acquisition costs, the PUE of a data center being a key factor. Below, expected acquisition costs are compared with energy costs, using a server as an example. Not included are the related data center costs for rack space, maintenance, monitoring, etc.
Server hardware acquisition cost: €1,500.00
Life span: 4 years
Power consumption (in operation): 400 watts
Hours per year: 24 hr. x 30.5 days x 12 months = 8,784 hr.
Power consumption per year: 8,784 hr. x 0.4 kW = 3,513.6 kWh
Power costs per kWh: €0.15
Power costs per year: 3,513.6 kWh x €0.15 = €527.04

| Power costs | 1 year | 4 years |
|---|---|---|
| PUE = 3.0 | 527.04 € x 3.0 = 1,581.12 € | 6,324.48 € |
| PUE = 2.2 | 527.04 € x 2.2 = 1,159.49 € | 4,637.95 € |
| PUE = 1.6 | 527.04 € x 1.6 = 843.26 € | 3,373.06 € |
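The server example can be reproduced in a few lines. The constant names are mine; the figures are the ones used above (including the handbook's 24 x 30.5 x 12 = 8,784 hours per year).

```python
# Server energy cost vs. PUE, following the worked example above.
ACQUISITION_EUR = 1500.00
HOURS_PER_YEAR = 24 * 30.5 * 12   # 8,784 hr. as used in the handbook
POWER_KW = 0.4                    # 400 watts in operation
PRICE_PER_KWH_EUR = 0.15

kwh_per_year = HOURS_PER_YEAR * POWER_KW               # 3,513.6 kWh
server_cost_per_year = kwh_per_year * PRICE_PER_KWH_EUR  # 527.04 EUR

for pue in (3.0, 2.2, 1.6):
    one_year = server_cost_per_year * pue   # facility-level energy cost
    print(f"PUE {pue}: {one_year:,.2f} EUR/yr, "
          f"{4 * one_year:,.2f} EUR over 4 yrs")
```

At a PUE of 3.0, four years of energy cost roughly four times the server's acquisition price, which is the point the comparison is making.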
2.4.4. In-House or Outsourced Data Center

IT departments generally realize, before the critical point in time arrives, that something needs to be done about their data center situation. The company then faces the question of whether to outsource its data center entirely, outsource only parts of it, or not outsource at all.
In the following sections it is assumed that the company decides against any outsourcing beyond housing and<br />
hosting services, leaving us with the following alternatives to discuss:<br />
• Owning the data center<br />
• Using housing or hosting services from a hosting provider (renting)<br />
• Mixture between owning and renting<br />
The arguments against outsourcing are mostly of a psychological or organizational nature or are security-related.<br />
Psychological Reasons<br />
A perceived loss of control is probably the strongest of the psychological reasons. Among the arguments stated are that response times to failure alerts would be longer and that it would not be entirely clear who had access to the computers. In addition, IT departments are often under the impression that they would become less important within the company if this core activity were outsourced.
There are valid arguments to counter these concerns because the issues raised can be resolved by working out<br />
adequate procedures and contracts with data center operators.<br />
Organizational Reasons<br />
The most frequently cited organizational arguments against outsourcing include a lack of transportation options for<br />
IT staff, necessary coordination of internal and external staff, an inability to react immediately to hardware events,<br />
limited influence on scheduled maintenance work, etc. Actual access needs and all the tasks involved must be<br />
investigated and clearly defined in advance. Many data center operators employ technical staff on-site who can<br />
carry out simple hardware jobs on behalf of the customer. Experience shows that most of the work can be done<br />
remotely. If on-site presence is required more frequently, the company can opt for a local operator.<br />
As the number of customers in a data center grows, an individual customer's influence on scheduled maintenance work decreases. This is where the availability level of the data center comes into play, because redundancy allows maintenance without interrupting operation.
Security-Related Reasons<br />
A perceived lack of security is often cited as the main argument against outsourcing a data center. Where no regulations exist that stipulate data be processed on company premises, these arguments are easy to counter. Here are some examples:
• <strong>Data</strong> center providers generally work in accordance with audited processes that define security in<br />
different areas.<br />
• The distance between the data center and company premises is a security factor.<br />
• The data centers are usually "invisible", meaning they are not signposted and, on customer request,<br />
operators guarantee anonymity.<br />
• Access restrictions for external staff are easier to implement.
It is recommended that these arguments be weighed, assessed and incorporated when calculating the budget. When comparing data center operators, it is important to acquire a comprehensive list of all the costs involved. The expenses listed for energy consumption often do not include the power consumption of the IT equipment (see also PUE).
A popular option is combining a company-owned data center and renting space from a data center operator. This<br />
solution offers the following advantages:<br />
• Company backup concepts can be realized.<br />
• Capacity bottleneck situations can be compensated.<br />
• The required flexibility (on demand) can be defined by contract.<br />
• A higher security level is reached.<br />
The bottom line is that there is no universally applicable right decision. Each concept has its advantages and<br />
requirements that must be assessed from a company-specific point of view.<br />
2.5. Aspects of the Planning of a <strong>Data</strong> <strong>Center</strong><br />
What does the perfect data center look like? Planning a data center involves a good deal of communication, because there is no such thing as the perfect data center.
Construction costs as well as operational costs of a data center are closely linked to the requirements placed on<br />
the data center. It is therefore advisable to pay particular attention to the analysis phase and the definition of<br />
requirements and concepts. Corporate strategy, requirements arising from IT governance and IT risk management<br />
must also be integrated.<br />
Whether it is the construction of a new data center, the extension of an existing one, the outsourcing of services<br />
(housing, hosting, managed services, cloud services) or yet another data center concept – the basic principle is<br />
always the same: there is no universal solution. A thorough analysis of the actual situation and the determination of requirements are indispensable prerequisites. A factor that cannot be underestimated is the definition of realistic performance parameters.
Selection of the site for a data center is becoming increasingly subject to economic considerations and available<br />
resources. Two crucial factors are the power supply and IP integration. Then there are the corporate challenges of<br />
physical safety, i.e. potential hazards in the vicinity, risks due to natural disasters (flooding, storms, lightning,<br />
earthquakes) and also sabotage.<br />
2.5.1. External Planning Support<br />
We must emphasize that planners and consultants are often only called in when the damage has already been done. Server parks, phone networks and complex information systems are realized without the help of specialized planners or professional consultants; then, when the facility is commissioned, deficiencies are often revealed that could have been avoided if professionals with comprehensive expertise had been consulted.
Another important aspect is a consultant's or planner's experience in data center issues.<br />
Management members, IT and infrastructure managers (electrical and air-conditioning engineers, utilities<br />
engineers, architects, security experts, network specialists, etc.) quite often do not speak the same language –<br />
even within the same company. One essential prerequisite of a successful data center project, however, is a<br />
perfect flow of communication and the integration of all the areas concerned – one of the most important tasks of<br />
the project leader.<br />
Companies often hire external consultants for analysis, definition of requirements and location research. It is<br />
essential that these external consultants maintain very close communication with management, the IT department<br />
and infrastructure managers to allow them to establish company-specific requirements that are economically<br />
feasible.<br />
The tasks of a data center planner are:<br />
• Initial assessment of the facts<br />
• Planning (iterative process) – define system specifications<br />
• Call for tender<br />
• Tender evaluation<br />
• Supervision, quality assurance<br />
• Inspection and approval<br />
• Warranty points<br />
• Project management<br />
[Figure: the data center planning cycle – initial assessment, analysis, planning, call for tender, tender evaluation, supervision, inspection and acceptance, warranty – with project management spanning all phases]<br />
In Switzerland, planning work is usually carried out in compliance with SIA guidelines (Swiss Engineer and<br />
Architects Association) and in Germany with the HOAI (Fees Regulations for Architects and Engineers).<br />
The profession of data center planner does not have an officially defined profile!<br />
<strong>Data</strong> center planners are often engineers with a specific technical background who acquired work experience in<br />
data center environments. Due to the high demand, the group of data center planners is growing rapidly.<br />
However, their service portfolios vary, and the selection and hiring of a planner depends on the client's ideas and<br />
requirements. These range from data center contractor (from planning to completion) to professional consultant<br />
for a subtask.<br />
The search term "data center planning" yields a wide range of service providers, such as:<br />
• Companies who have added data center planning and all aspects connected with it to their existing<br />
portfolio. Typical examples are IBM, HP, Emerson or Rittal. The advantage stated is the one-stop-shopping<br />
approach.<br />
• Companies who have specialized in data center planning. In many cases these are experts in specific<br />
industrial areas, who subcontract other experts themselves if required. Some of these employ engineers<br />
from all technical fields required for an overall data center planning.<br />
• Building engineering consultants and engineering firms<br />
• Companies specialized in structured cabling systems<br />
• Suppliers of data center infrastructures (e.g. air-conditioning, fire protection, power supply)<br />
• Suppliers of passive network components<br />
• <strong>Data</strong> center operators<br />
• Consultancy firms<br />
• Power supply companies with Energy Performance Contracting (EPC)<br />
In many cases, potential clients – depending on in-house expert experience and skills – only hire experts for<br />
specific areas or consult a quality assurance professional.<br />
2.5.2. Further Considerations for Planning<br />
Structured cabling systems and application-neutral communications cabling (see also EN 50173,<br />
EN 50173-5 and EN 50174) are often not a high priority for data center planners.<br />
However, experts estimate that today as much as 70 % of all IT failures are caused by faults in the cabling system<br />
infrastructure.<br />
Since the introduction of Ethernet, the data rate has increased by a factor of 10 every 6 years. This generates an<br />
increased demand for bandwidth that a large number of networks installed around the world are not capable of<br />
transmitting.<br />
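This growth rule can be expressed as simple arithmetic. The sketch below is purely illustrative; the 10 Mbit/s starting point and the year 1983 are assumed baseline values for the example, not figures taken from this handbook.<br />

```python
# Rough projection of Ethernet data rates, assuming a tenfold
# increase every 6 years from an assumed 10 Mbit/s baseline in 1983.

def projected_rate_mbit(year, base_year=1983, base_rate_mbit=10.0):
    """Return the projected Ethernet data rate in Mbit/s for a given year."""
    generations = (year - base_year) / 6.0
    return base_rate_mbit * 10.0 ** generations

for year in (1983, 1995, 2001, 2013):
    print(f"{year}: ~{projected_rate_mbit(year):,.0f} Mbit/s")
```

Under these assumptions the model reproduces the familiar milestones: roughly 1 Gbit/s around the late 1990s and 10 Gbit/s in the early 2000s, which illustrates why installed cabling quickly falls behind demand.<br />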
In network planning, however, many essential data center planning aspects need to be considered and implemented,<br />
namely:<br />
• Security, availability<br />
• Impact on energy consumption – energy efficiency<br />
• Redundancy<br />
• Installation and maintenance costs<br />
• Delivery times<br />
• Flexibility for changes<br />
• Productivity<br />
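The redundancy item in the list above can be quantified: two independent parallel components fail only if both fail at the same time. A minimal sketch of that arithmetic, using a made-up example availability of 99.9 % per component:<br />

```python
# Availability gain from redundant, independent components:
# a parallel pair fails only when both components fail at once.

def parallel_availability(a1, a2):
    """Combined availability of two independent redundant components."""
    return 1.0 - (1.0 - a1) * (1.0 - a2)

single = 0.999  # example value: one power feed available 99.9 % of the time
redundant = parallel_availability(single, single)
print(f"single feed: {single:.4%}, redundant pair: {redundant:.6%}")
```

The example shows why redundancy appears in the list alongside availability: duplicating a 99.9 % component yields 99.9999 %, three orders of magnitude less downtime, provided the two paths really are independent.<br />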
3. <strong>Data</strong> <strong>Center</strong> Overview<br />
The purpose of this section is to provide a comprehensive overview of the relevant standards and prevailing<br />
technologies in data centers. It discusses data center layout, its overall infrastructure, zones and hierarchies. In<br />
addition to active components and network and virtualization technologies, this section primarily deals with transmission<br />
protocols and media in data centers. It also examines current and future LAN technologies as well as<br />
cabling architectures. This is because data center planners, operators and managers can successfully manage<br />
current and upcoming tasks only through a holistic view.<br />
3.1. Standards for <strong>Data</strong> <strong>Center</strong>s<br />
In earlier times, a room in which one or more servers, a telephone system and active network components were<br />
located was often called a computer room. However, because of the development of cloud computing, virtualization<br />
technologies, and an increased tendency to use outsourcing, “computer room” requirements became more<br />
complex. The one room was replaced by a room structure, whose individual rooms are assigned defined functions.<br />
The data center is now divided up into an entrance area, the actual computer room and a work area for<br />
administrators, right through to separate rooms for UPS batteries, emergency power generators and cooling.<br />
In addition, attention must be given to matters like access control, video monitoring and alarm systems. An<br />
emphasis on active and passive components requires an improved infrastructure that includes such equipment<br />
as a cooling system and power supply, which in turn affect installation and construction as well as the data<br />
cabling structure. The point-to-point connections that were used previously are being replaced by a structured<br />
cabling system, which allows for rapid restoration in case of a fault, effortless system expansion and easy<br />
administration.<br />
The data center layout, hierarchical structure and individual zones and their functions are described in detail in<br />
sections 3.2 and 3.3 below.<br />
3.1.1. Overview of Relevant Standards<br />
Increasing demands on data centers have led standardization bodies to take this development into account and to<br />
take a close look at this topic. Cabling standards issued by international and national committees describe the<br />
structure and characteristics of a cabling system almost identically, but differ in terms of content. There are three<br />
big organizations in the world which are concerned with the standardization of data centers.<br />
The ISO/IEC (International Organization for Standardization / International Electrotechnical Commission) develops<br />
international standards, the CENELEC (Comité Européen de Normalisation Électrotechnique) European standards<br />
and the ANSI (American National Standards Institute) American standards. There are other organizations for the<br />
area of data centers, though these play a subordinate role to the ones above.<br />
The standardization bodies also remain in contact with one another in an attempt to avoid differing interpretations<br />
on the same topic, but this is not always successful. To achieve harmonization in Europe, CENELEC and ISO<br />
exchange information with national committees (SEV/SNV, VDE/DIN, ÖVE, etc.). <strong>Data</strong> center cabling is covered<br />
by the following standards:<br />
• ISO/IEC 24764 • EN 50173-5 • TIA-942<br />
All three standards focus on cabling, but describe the requirements for a data center in different ways. New<br />
standards were developed for this area so as not to change the general (generic) cabling structure all too greatly.<br />
The central focus of these standards is to cover the structure and performance of data center cabling. The goal<br />
was to move away from the general point-to-point connections normally used in data centers and construct a<br />
structure that is flexible, scalable, clear and permits changes, troubleshooting and expansion.<br />
Where do the differences lie between the different standards for a data center?<br />
A “cabling planner” designing a generic building cabling system must consider space requirements, the<br />
connection for potential equalization and other factors, but in a data center the planner must also consider things<br />
like low space availability, air conditioning, high power consumption, redundancy and failure safety including<br />
access restriction.<br />
The type of data center and the future prospects for transmission protocols and data rates affect the<br />
development of standards. These requirements and the interfaces to other standards are examined differently,<br />
depending upon the standard. The most important points in the different standards are classified in the following<br />
table.<br />
Criteria | ISO/IEC 24764 | EN 50173-5 | TIA-942<br />
Structure | ✓ | ✓ | ✓<br />
Cabling performance | ✓ | ✓ | ✓<br />
Redundancy | ✓ | ✓ | ✓<br />
Grounding/potential equalization | IEC 60364-1 | EN 50310 | ✓ 2<br />
Tier classification | – | – | ✓<br />
Cable routing | IEC 14763-2 1 | EN 50174-2/A1 | ✓ 6<br />
Ceilings and double floors | IEC 14763-2 1 | EN 50174-2/A1 | ✓ 6<br />
Floor load | – | – | ✓<br />
Space requirements (ceiling height, door width) | IEC 14763-2 1 | EN 50174-2/A1 3 | ✓<br />
Power supply/UPS | – | – | ✓<br />
Fire protection/safety | – | EN 50174-2/A1 4 | ✓ 4<br />
Cooling | – | – | ✓<br />
Lighting | – | – | ✓<br />
Administration/labeling | IEC 14763-1 5 | EN 50174-2/A1 5 | ✓ 1<br />
Temperature/humidity | – | – | ✓<br />
1 not data center-specific, 2 refers to TIA-607, 3 only door widths and ceiling height, 4 refers to local standards,<br />
5 refers to complexity level, 6 refers to TIA-569<br />
At first glance, we see that TIA-942 devotes much more attention to the subject of data centers than EN 50173-5<br />
or ISO/IEC 24764. Many important terms and their descriptions can only be found in additional ISO/IEC and EN<br />
documents, and are kept at a very general level. Some IEC documents were created a few years ago and are<br />
therefore not at the current state of technology. There are also differences in terminology in the central areas of<br />
these cabling standards.<br />
ISO terminology is used in this section since ISO is the worldwide organization. ANSI terms are used<br />
for items for which the ISO has no corresponding terminology.<br />
3.1.2. ISO/IEC 24764<br />
The configuration and structure of an installation per ISO/IEC 24764 must be carried out in accordance with this<br />
standard and/or ISO 11801. Testing methods for copper cabling must follow IEC 61935-1 and for optical fiber<br />
cabling the IEC 14763-3 standard must be followed. In addition, a quality plan and installation guidelines in accordance<br />
with IEC 14763-2 must be observed for any compliant installation.<br />
Since IEC 14763-2 is an older document, ISO/IEC 18010 must be consulted for cable routing issues. This standard<br />
not only contains descriptions for cable routing, but also rudimentary instructions concerning failure safety.<br />
Nevertheless, neither IEC 14763-2 nor ISO/IEC 18010 is explicitly designed for data center requirements. The<br />
cabling system is arranged in a tree structure that starts out from a central distributor.<br />
Point-to-point connections must be avoided wherever possible. Exceptions are allowed as long as active devices<br />
are located very close to one another or cannot communicate over the structured cabling system. Not included in<br />
this structure are cabling in a local distributor per ISO 11801 and cabling at the interface to external networks.<br />
[Figure: cabling structure per ISO/IEC 24764] ENI: External Network Interface, MD: Main Distributor, ZD: Zone Distributor,<br />
LDP: Local Distribution Point, EO: Equipment Outlet, EQP: Equipment<br />
The data center layout, its individual zones and their functions are described in detail in section 3.2 below.<br />
Evolution of the basic computer room into a data center involves increased layout requirements. As mentioned at<br />
the start, it has become necessary to separate the different areas of the data center into rooms. It must be clearly<br />
indicated that the network structure in the data center is separate from building cabling. In addition, the connection<br />
to the internal network must be established over a separate distributor that is spatially detached from the data<br />
center. The interface to the external network (ENI) is created either within or outside of the data center. All other<br />
functional elements must exist in the data center permanently and always be accessible.<br />
Cabling components, patch cords and connection cables as well as distributors for the building’s own network are<br />
not components of the standard. As the table below shows, ISO 11801 has terminology that is different from that<br />
of ISO/IEC 24764.<br />
ISO/IEC 24764 | ISO 11801<br />
External network interface (ENI) | Campus distributor (CD)<br />
Network access cable | Primary cable<br />
Main distributor (MD) | Building distributor (BD)<br />
Main distributor cable | Secondary cable<br />
Zone distributor (ZD) | Floor distributor (FD)<br />
Area distributor cable | Tertiary/horizontal cable<br />
Local distribution point (LDP) | Consolidation point (CP)<br />
Local distribution point cable | Consolidation point cable<br />
Equipment outlet (EO) | Telecommunications outlet (TO)<br />
Different distributor functions may be combined, depending upon the size of the data center. However, at<br />
least one main distributor must exist. Redundancy is not explicitly prescribed, but is referred to under failure<br />
safety. The standard gives information on options for redundant connections, cable paths and distributors.<br />
Like the consolidation point, the local distribution point (LDP) is located either in the ceiling or in the raised<br />
floor. Patching is not permitted, since the connection must be wired through.<br />
ISO/IEC 24764 refers to ISO 11801 regarding performance. The properties described in detail in ISO 11801 apply<br />
for all transmission links or channels and components. The difference is that for the data center minimum requirements<br />
for main and area distributor cabling apply.<br />
A cabling class that satisfies the applications listed in ISO 11801 is permitted for the network access connection.<br />
Cabling Infrastructure:<br />
• Copper cables: Category 6A, Category 7, Category 7A<br />
• Copper transmission link or channel: Class EA, Class F, Class FA<br />
• Multi-mode optical fiber cables: OM3, OM4<br />
• Single-mode optical fiber cables: OS1, OS2<br />
Connectors for Copper Cabling:<br />
• IEC 60603-7-51 or IEC 60603-7-41 for Category 6A, for shielded and unshielded respectively<br />
• IEC 60603-7-7/71 for Category 7 and 7A<br />
• IEC 61076-3-104 for Category 7 and 7A<br />
Connectors for Optical Fibers:<br />
• LC duplex per IEC 61754-20 for the equipment outlet (EO) and external network interface (ENI)<br />
• MPO/MTP® multi-fiber connectors per IEC 61754-7 for 12 or 24 fibers<br />
It is important that the appropriate polarity be observed in multi-fiber connections so that transmitters and receivers<br />
are correctly connected. ISO/IEC 24764 references IEC 14763-2 in this regard; due to the age of that document,<br />
however, the polarity of multi-fiber connections is not described there.<br />
3.1.3. EN 50173-5<br />
This document is essentially the same as ISO/IEC 24764, but has a few minor differences. The last changes were<br />
established through an amendment. Therefore EN 50173-5 as well as EN 50173-5/A1 are necessary for the<br />
complete overview. This also applies for all other standards in the EN 50173 series. EN 50173-5 also uses the<br />
hierarchical structure per EN 50173-1 & -2.<br />
A standard-compliant installation is achieved through both application of the EN 50173 series and application of<br />
the EN 50174 series and its amendments. In particular, an entire section about data centers has been added to<br />
EN 50174-2/A1. The following topics are covered in this section:<br />
• Design proposals<br />
• Cable routing<br />
• Separation of power and data cables<br />
• Raised floors and ceilings<br />
• Use of pre-assembled cables<br />
• Entrance room<br />
• Room requirements<br />
In contrast to ISO 11801, this standard uses optical fiber classes other than OF-300, OF-500 and OF-2000; possible<br />
classes are OF-100 to OF-10000. Cabling performance is determined by EN 50173-1 and EN 50173-2. However,<br />
minimum requirements for data center cabling were also established in the standard. A cabling class that satisfies<br />
the applications listed in ISO 11801 is also permitted for the network access connection in EN 50173-5.<br />
Cabling Infrastructure:<br />
• Copper cable: Category 6A, Category 7, Category 7A<br />
• Copper transmission path: Class EA, Class F, Class FA<br />
• Multi-mode optical fiber cable: OM3, OM4<br />
• Single-mode optical fiber cable: OS1, OS2<br />
Plug Connectors for Copper Cabling:<br />
• IEC 60603-7-51 or IEC 60603-7-41 for Category 6A, for shielded and unshielded respectively<br />
• IEC 60603-7-7/71 for Category 7 and 7A<br />
• IEC 61076-3-104 for Category 7 and 7A<br />
Plug Connectors for Optical Fibers:<br />
• LC duplex per IEC 61754-20 for the equipment outlet (EO) and external network interface (ENI)<br />
• MPO/MTP® multi-fiber connectors per IEC 61754-7 for 12 or 24 fibers<br />
Though different options are listed for the use of multi-fiber connections, polarity is not covered in any of these<br />
standards. Detailed information on this appears in sections 3.10.1 and 3.10.2.<br />
3.1.4. TIA-942<br />
Without a doubt, the American standard is currently the most comprehensive body of standards for data centers.<br />
Not only is cabling described, but also the entire data center environment including the geographical location, as<br />
well as a discussion of seismic activities. Among other things, the data center is divided up into so-called “tiers”, or<br />
different classes of availability (see section 2.2.1). The data center must satisfy specific requirements for each<br />
class in order to guarantee availability.<br />
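The availability classes behind these tiers translate directly into permissible downtime per year. The percentages in the sketch below are the commonly published tier availability figures, not values quoted from this handbook, and the conversion itself is straightforward:<br />

```python
# Annual downtime implied by the availability figures commonly cited
# for the four tiers (values are the usual published numbers, assumed
# here for illustration rather than taken from this handbook).

TIER_AVAILABILITY = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

def downtime_hours_per_year(availability_percent):
    """Hours of outage per year implied by an availability percentage."""
    return (1.0 - availability_percent / 100.0) * 365.0 * 24.0

for tier, avail in TIER_AVAILABILITY.items():
    print(f"{tier}: {avail}% -> {downtime_hours_per_year(avail):.1f} h/year")
```

The difference is drastic: under these assumed figures, Tier I tolerates roughly a day of outage per year, while Tier IV permits well under an hour, which is what drives the escalating redundancy requirements per class.<br />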
What differences from ISO/IEC exist, apart from cabling performance?<br />
• With TIA-942, the interface to the external network may not be in the computer room.<br />
• The maximum length of horizontal cabling may not exceed 300 meters, even with optical fibers.<br />
• No minimum requirements are provided for cabling.<br />
• The limits for TIA Category 6A are less strict than those for the corresponding Category 6A of ISO/IEC 24764 or EN 50173-5.<br />
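Constraints such as the 300-meter ceiling on horizontal runs are easy to enforce mechanically in planning tools. A minimal sketch, where the function name and structure are illustrative assumptions:<br />

```python
# TIA-942 caps horizontal cabling at 300 m even for optical fiber
# (per the bullet list above); a planning tool can flag violations early.

TIA_942_HORIZONTAL_LIMIT_M = 300.0

def horizontal_run_ok(length_m, limit_m=TIA_942_HORIZONTAL_LIMIT_M):
    """True if a planned horizontal run respects the TIA-942 length cap."""
    return 0.0 < length_m <= limit_m

print(horizontal_run_ok(90.0))   # a typical copper channel length
print(horizontal_run_ok(450.0))  # too long under TIA-942, even for fiber
```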
The smallest common denominator of the data center standards listed is cabling. TIA-942 references the<br />
ANSI/TIA-568 series in the area of performance. TIA-942 itself is divided into three parts: TIA-942, TIA-942-1<br />
and TIA-942-2. All data center details are covered in the main standard. The use of coaxial cables is covered in<br />
TIA-942-1, and the latest version of the Category 6A (copper) performance class, laser-optimized fibers (OM3 and<br />
higher), changed temperature and ambient humidity limits as well as other changes in content are introduced in<br />
TIA-942-2. Cabling is defined as follows in TIA-942.<br />
Cabling Infrastructure:<br />
• Copper cable: Category 3 – 6A<br />
• Copper transmission path: Category 3 – Category 6A (Category 6A is recommended, TIA doesn’t use classes)<br />
• Multi-mode optical fiber cable: OM1, OM2, OM3 (OM3 recommended)<br />
• Single-mode optical fiber cable: 9/125 µm<br />
• Coaxial cables: 75 ohm<br />
Connectors for Copper Cabling:<br />
• ANSI/TIA/EIA-568-B.2<br />
Plug Connectors for Optical Fibers:<br />
• SC duplex (568 SC) per TIA 604-3<br />
• MPO/MTP ® multi-fiber connector per TIA 604-5 (only 12-fiber)<br />
• Other duplex connectors like LC may be used.<br />
A new version of TIA-942 (TIA-942-A) was in the planning stages at the time this document was published. Since<br />
this revision includes important areas, a preview of the new version is worthwhile.<br />
• Combining TIA-942, TIA-942-1 and TIA-942-2 into one document<br />
• Harmonization with TIA-568.C0, TIA-568.C2, TIA-568.C3<br />
• Grounding is removed and integrated with TIA-607-B<br />
• Administration is integrated with TIA-606-B<br />
• Distribution cabinets and separation of data and power cables is integrated with TIA-569-C<br />
• The length restriction for optical fiber paths in horizontal cabling is rescinded.<br />
• Category 3 and Category 5 are no longer allowed for horizontal cabling.<br />
• The optical fibers for backbone cabling are OM3 and OM4. OM1 and OM2 are no longer allowed.<br />
• LC and MPO plugs are a mandatory requirement for the data center.<br />
• Adaptation of ISO/IEC 24764 terminology<br />
Combining Bodies of Standards<br />
None of the bodies of standards is complete. A body will either omit an examination of the overall data center, or<br />
its permitted performance class will not allow for scaling to higher data rates. It is therefore recommended that all<br />
relevant standards be consulted when planning a data center. The best standard should be used, depending upon<br />
specific requirements. One example of this would be to use the cabling performance capability from the standard<br />
which provides the highest performance requirement for cabling components. In the case of other properties,<br />
other standards should be used if they handle these properties in a better way.<br />
Recommendation:<br />
The standards applied must therefore be listed in the customer requirement specification in order to<br />
avoid ambiguous requirements!<br />
3.2. <strong>Data</strong> <strong>Center</strong> Layout<br />
In spite of the virtualization trends currently in the software market – keyword server consolidation – there is a high<br />
demand on the horizon for renewing data centers and server rooms. It is estimated that over 85 percent of all data<br />
centers in Germany are over ten years old. A study by Actionable Research, recently commissioned by Avocent,<br />
also examined the global situation in this area. According to the study, approximately 35 percent of all data centers<br />
worldwide are currently undergoing a redesign.<br />
3.2.1. Standard Requirements<br />
Even if small differences between generic and data center cabling exist between the American TIA-942<br />
standard and the international and European standards, these bodies of standards essentially differ only in how<br />
they name the functional elements.<br />
ISO/IEC 24764 | ISO 11801 | TIA-942<br />
External Network Interface (ENI) | Campus Distributor (CD) | ENI (External Network Interface), ER (Entrance Room)<br />
Network Access Cable | Primary Cable | Backbone Cabling<br />
Main Distributor (MD) | Building Distributor (BD) | MC (Main Cross-Connect), MDA (Main Distribution Area)<br />
Main Distributor Cable | Secondary Cable | Backbone Cabling<br />
Zone Distributor (ZD) | Floor Distributor (FD) | HC (Horizontal Cross-Connect), HDA (Horizontal Distribution Area)<br />
Zone Distributor Cable | Tertiary/Horizontal Cable | Horizontal Cabling<br />
Local Distribution Point (LDP) | Consolidation Point (CP) | CP (Consolidation Point), ZDA (Zone Distribution Area)<br />
Local Distribution Point Cable | Consolidation Point Cable | Horizontal Cabling<br />
Equipment Outlet (EO) | Telecommunications Outlet (TO) | EO (Equipment Outlet), EDA (Equipment Distribution Area)<br />
A unique assignment of functional elements to rooms exists, though in TIA-942 it is data center areas rather<br />
than the functional elements that are at the forefront. On detailed examination, we see that this topology is<br />
the same for ISO/IEC 24764 and EN 50173-5.<br />
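Because the three bodies of standards use different names for the same functional elements, planning documentation often needs to translate between them. The lookup table below is one possible representation derived from the table above; the dictionary structure and the helper function are illustrative, not part of any standard.<br />

```python
# Cross-standard names for the same functional element, as tabulated above.
# Keys are ISO/IEC 24764 terms; values give ISO 11801 and TIA-942 equivalents.

TERMINOLOGY = {
    "Main Distributor (MD)": {
        "ISO 11801": "Building Distributor (BD)",
        "TIA-942": "Main Cross-Connect (MC) / Main Distribution Area (MDA)",
    },
    "Zone Distributor (ZD)": {
        "ISO 11801": "Floor Distributor (FD)",
        "TIA-942": "Horizontal Cross-Connect (HC) / Horizontal Distribution Area (HDA)",
    },
    "Equipment Outlet (EO)": {
        "ISO 11801": "Telecommunications Outlet (TO)",
        "TIA-942": "Equipment Outlet (EO) / Equipment Distribution Area (EDA)",
    },
}

def equivalent(iso24764_term, standard):
    """Look up the equivalent term in another body of standards."""
    return TERMINOLOGY[iso24764_term][standard]

print(equivalent("Zone Distributor (ZD)", "TIA-942"))
```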
Individual areas in the data center are therefore assigned specific functions.<br />
• Entrance Room (External Network Interface): This is the entrance area to the network in the data center,<br />
which provides for access to the public network (the Internet provider) and can be connected multiple<br />
times, depending upon the “tier” level that is selected. In smaller networks, the External Network Interface<br />
can be connected directly to the Horizontal Distribution Area (Zone Distributor).<br />
• Main Distribution Area (Main Distributor): This area represents the core of the data center and therefore<br />
forms the “lifeblood” of the network, which is why redundant connections and components are of crucial<br />
importance in this area in particular. All backbone data traffic is controlled here, which is why this point in<br />
the network is known as the Core Layer. However, the Aggregation Layer (or Distribution Layer), whose<br />
aggregation/distribution switches bundle and forward Access Layer data traffic to the core, is also located<br />
in this area.<br />
• Horizontal Distribution Area (Zone Distributor): In this “interface” between backbone and horizontal<br />
cabling, the data traffic of the access switches which control the data exchange with terminal devices is<br />
“passed over” to the aggregation layer. This area in the network is known as the Access Layer.<br />
• Zone Distribution Area (Local Distribution Point): This area is for “interim distribution” to the Equipment<br />
Distribution Area, which can be used for reasons of space and is placed in the raised floor, for example.<br />
The Raised Floor Solution from R&M that was developed for this purpose provides the ideal option for this<br />
process, since it allows up to 288 connections per box and is freely configurable, thanks to its modular<br />
design.<br />
• Telecom Room: This is where the connection to the internal network is located.<br />
• The Operation <strong>Center</strong>, Support Room and Offices are rooms for data center personnel.<br />
Backbone cabling is preferably laid with fiber optic cables, and horizontal cabling with copper cables. The<br />
transmission media recommended for this purpose is detailed in section 3.9. Possible transmission protocols and<br />
the associated maximum transmission rates (see section 3.8) are defined at the same time that cable types are<br />
selected, which is why this is a significant decision which determines the future viability of the data center.<br />
An equally important factor in data center scalability is determination of the cabling architecture, which in turn<br />
influences network availability and determines rack arrangement. Different cabling architectures such as “End of<br />
Row”, “Top of Rack”, etc., as well as their advantages and disadvantages are listed under section 3.4.<br />
Additional designs for the physical infrastructure of a data center are described in section 3.5.<br />
3.2.2. Room Concepts<br />
Different room concepts exist with regard to data center layout.<br />
The classical Room-in-Room Concept with separate technical and IT security rooms that house any type and<br />
number of server racks and network cabinets is equipped with raised floors, dropped ceilings if necessary, active<br />
and passive fire protection and a cooling system.<br />
The large room container is a Modular<br />
Container Concept with separate units for<br />
air conditioning and energy, as well as an IT<br />
container to house server, storage and<br />
network devices.<br />
The Self-Sufficient Outdoor <strong>Data</strong> <strong>Center</strong>,<br />
with its own block heating and generating<br />
plant, also provides a transportable data<br />
center infrastructure, but is independent from<br />
the external energy supply. The associated<br />
power plant provides for the supply of both<br />
energy as well as cold water, through an<br />
absorption unit.<br />
The redundant, automated Compact <strong>Data</strong> <strong>Center</strong> represents an entire, completely redundant data center which<br />
includes infrastructure and high-performance servers in one housing. Due to its high power density, this “Mini <strong>Data</strong><br />
<strong>Center</strong>” is also a suitable platform for private cloud computing.<br />
3.2.3. Security Zones<br />
Information technology security is a broad term which includes logical data security, physical system security, and<br />
organizational process security. The goal of a comprehensive security concept is to examine all areas, detect and<br />
assess risks early on and take measures so that a company’s competitive ability on the market is not at risk.<br />
When a company’s IT infrastructure and different IT functional areas are taken into consideration, a well<br />
thought-out design can reduce or even eliminate significant physical security risks. Both the locations of IT areas<br />
and the spatial assignment of different functions play a decisive role in this process.<br />
Functional Areas<br />
Designing an IT infrastructure and therefore selecting the location of a data center are based on a company’s<br />
specific data security concept, which reflects its requirements of availability and the direction of corporate policy.<br />
The following criteria should be examined when considering the physical security of a data center location:<br />
• Low potential of danger through neighboring uses, adjacent areas or functions<br />
• Avoidance of risks through media and supply lines, tremors, chemicals, etc. which may impair the physical<br />
security of IT systems<br />
• Prevention of possible dangers through natural hazards (water, storms, lightning, earthquakes) – assessment<br />
of the characteristics of a region<br />
• The data center as a separate, independent functional area<br />
• Protection from sabotage via a “protected” location<br />
• An assessment of the danger potential that is based on the social position of the company<br />
If all risk factors and basic company-specific conditions are taken into consideration, not only can dangers be<br />
eliminated in advance during the conception process for the IT infrastructure, but expenditures and costs can also<br />
be avoided.<br />
When designing and planning a data center, its different functional areas are arranged in accordance with their<br />
requirements for security and their importance to the data center’s functional IT integrity.<br />
The different functional areas can be divided up as follows:<br />
Security Zone | Function | Marking (example)<br />
1 | Site | white<br />
2 | Semi-public area, adjacent office spaces | green<br />
3 | Operating areas, auxiliary rooms for IT | yellow<br />
4 | Technical systems for IT operation | blue<br />
5 | IT and network infrastructure | red<br />
Arrangement of Security Zones<br />
The image above is one example that results when different security zones are shown schematically: The IT area<br />
(red) is located on the inside and is protected by its adjacent zones 3 and 4 (yellow/blue). Security zones 1 and 2<br />
(white/green) form the outer layer.<br />
Separating functional areas limits the possibilities for accessing sensitive areas, so possible sabotage is<br />
prevented. This ensures, for example, that a maintenance technician for air conditioning systems or the UPS only<br />
has access to the technical areas (blue) of the company and not to the IT area (red).<br />
The locations of the different functional areas as well as the division of security zones, or security lines, are key to<br />
ensuring the security of the IT infrastructure. However, continuous IT availability can be realized only within the<br />
overall context of a comprehensive security concept that considers all IT security areas.<br />
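The zone model above can be made concrete in an access-control system. The following Python sketch is illustrative only; the role names (`hvac_technician`, `it_administrator`) and their zone assignments are assumptions, not part of the handbook — a real assignment would come from the company-specific security concept.

```python
# Illustrative model of the five security zones and a role-based access
# check. Zone numbering and markings follow the table above; the roles
# and their clearances are hypothetical examples.
ZONES = {
    1: ("Site", "white"),
    2: ("Semi-public area, adjacent office spaces", "green"),
    3: ("Operating areas, auxiliary rooms for IT", "yellow"),
    4: ("Technical systems for IT operation", "blue"),
    5: ("IT and network infrastructure", "red"),
}

# Hypothetical role-to-zone clearances.
ACCESS = {
    "visitor": {1, 2},
    "hvac_technician": {1, 2, 3, 4},     # may reach the blue technical areas
    "it_administrator": {1, 2, 3, 4, 5},
}

def may_enter(role: str, zone: int) -> bool:
    """Return True if the role is cleared for the given security zone."""
    return zone in ACCESS.get(role, set())
```

In this model the air-conditioning technician from the example reaches zone 4 (blue) but is stopped at zone 5 (red).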
3.3. Network Hierarchy<br />
A hierarchical network design subdivides the network into discrete layers. Each of these layers provides specific<br />
functions that define its role within the overall network. When the different functions provided in a network are<br />
made separate, the network design becomes modular and this also results in optimal scalability and performance.<br />
As compared to other network designs, a hierarchical network is easier to administer and to expand, and problems<br />
can be solved more quickly.<br />
3.3.1. Three Tier Network<br />
Three Tier Networks consist of an Access Layer with switches to desktops, servers and storage resources; an<br />
Aggregation/Distribution Layer in which switching centers combine and protect (e.g. via firewalls) the data streams<br />
forwarded from the Access Layer; and the Core Switch Layer, which regulates the traffic in the backbone.<br />
[Figure: Three Tier Network – Core, Aggregation/Distribution and Access Layers; LAN and SAN switches connect servers and storage over FO and copper backbone cabling]<br />
Three Tier Networks resulted from Two Tier Networks toward the end of the 1990s, when Two Tier networks were pushing their capacity limits. The resulting bottleneck could be rectified by using an additional aggregation layer. From a technical point of view, the addition of a third layer was therefore a cost-effective, temporary solution for resolving the performance problems of that time.<br />
Both network architectures are governed by the Spanning Tree Protocol (STP). This method was developed in 1985 by Radia Perlman and determines how switching traffic in the network behaves. After 25 years, however, STP is reaching its limits. The IETF (Internet Engineering Task Force) therefore intends to replace STP with the TRILL protocol (Transparent Interconnection of Lots of Links), an effort initiated by the inventor of STP.<br />
Network protocols for redundant paths are described in more detail in section 3.8.6.<br />
Trends such as virtualization and the associated infrastructure standardization represent a new challenge for these Three Tier Networks.<br />
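The core idea of STP — one bridge becomes the root and all redundant links are blocked — can be sketched in a few lines. This is a heavily simplified model, not the real protocol: actual STP elects the root via bridge priorities and exchanges BPDUs, and path cost is not simply hop count. Here the lowest bridge ID wins and a breadth-first search stands in for path selection.

```python
from collections import deque

def spanning_tree(bridges, links):
    """Simplified spanning-tree model: the bridge with the lowest ID
    becomes root, every other bridge keeps its shortest path to the
    root, and all remaining links are blocked (loop-free topology)."""
    root = min(bridges)
    adj = {b: [] for b in bridges}
    for a, b in links:
        adj[a].append(b)
        adj[b].append(a)
    parent, seen, queue = {}, {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nb in sorted(adj[node]):      # deterministic tie-break
            if nb not in seen:
                seen.add(nb)
                parent[nb] = node
                queue.append(nb)
    active = {tuple(sorted((child, p))) for child, p in parent.items()}
    blocked = [l for l in links if tuple(sorted(l)) not in active]
    return root, active, blocked
```

For a triangle of three bridges, one of the three links ends up blocked — exactly the redundant path that STP disables and that TRILL aims to put to productive use.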
Rampant server virtualization is making for increasing complexity. For example, ten virtual servers can operate on one physical computer and can also be moved from hardware to hardware as needed, in a very flexible manner. So if a network previously had to manage the data traffic of 1,000 servers, this now becomes 10,000 virtual machines, which, to make matters worse, are constantly in motion.<br />
Classic networks built on three tiers, which have dominated data centers since the late 90s, are having more and more trouble with this complexity. The call for a flat architecture, in which the network resembles a fabric of nodes with equal rights, is becoming increasingly loud. Instead of just point-to-point connections, cross-connections between nodes are also possible and increase the performance of this type of network.<br />
3.3.2. Access Layer<br />
The Access Layer creates the connection to terminal devices such as PCs, printers and IP phones, and arranges<br />
access to the remainder of the network. This layer can include routers, switches, bridges, hubs and wireless<br />
access points for wireless LAN. The primary purpose of the Access Layer is to provide a mechanism which<br />
controls how devices are connected to the network and which devices are allowed to communicate in the network.<br />
For this reason, LAN switches in this layer must support features like port security, VLANs, Power over Ethernet<br />
(PoE) and other necessary functionalities. Switches in this layer are often also known as Edge Switches.<br />
Multiple access switches can often be stacked with one another into one virtual switch, which can then be<br />
managed and configured through one master device, since the stack is managed as one object.<br />
Manufacturers sometimes also offer complete sets of security features for connection and access control. One<br />
example of this is user authentication at the switch port in accordance with IEEE 802.1X, so only<br />
authorized users have access to the network, while malicious data traffic is prevented from spreading.<br />
New Low Latency Switches specifically address the need for I/O consolidation in the Access Layer, for example in<br />
densely packed server racks. Cabling and management are simplified dramatically and power consumption and<br />
running costs drop while availability increases. These switches already support Fibre Channel over Ethernet<br />
(FCoE) and thus the construction of a Unified Fabric. Unified Fabric combines server and storage networks into<br />
one common platform that can be uniformly administered, which paves the way for extensive virtualization of all<br />
services and resources in the data center.<br />
Product example from Cisco<br />
3.3.3. Aggregation / Distribution Layer<br />
The Aggregation Layer, also known as the Distribution Layer, combines the data received from the<br />
switches in the Access Layer before forwarding them on to the Core Layer to be routed to the final receiver.<br />
This layer controls the flow of network data with the help of guidelines and establishes the broadcast domains by<br />
carrying out the routing functions between VLANs (virtual LANs) that are defined in the Access Layer. This routing<br />
typically occurs in this layer, since Aggregation Switches have higher processing capacities than switches in the<br />
Access Layer. Aggregation Switches therefore relieve Core Switches of the need to carry out this routing function.<br />
Core Switches are already used to capacity for forwarding on very large volumes of data. The data on a switch<br />
can be subdivided into separate sub-networks through the use of VLANs. For example, data in a university can be<br />
subdivided by addressee into data for individual departments, for students and for visitors.<br />
In addition, Aggregation Switches also support ACLs (Access Control Lists) which are able to control how data are<br />
transported through the network. An ACL allows the switch to reject specific data types and to accept others. This<br />
way they can control which network devices may communicate in the network. The use of ACLs is processor-intensive,<br />
since the switch must examine each individual packet to determine whether it corresponds to one of the<br />
ACL rules defined for the switch. This examination is carried out in the Aggregation Layer, since the switches in<br />
that layer generally have processing capacities that can manage the additional load. Aggregation Switches are<br />
usually high-performance devices that offer high availability and redundancy in order to ensure network reliability.<br />
Besides, the use of ACLs in this layer is comparatively simple. So instead of using ACLs on every Access Switch<br />
in the network, they are configured on Aggregation Switches, of which there are fewer, making ACL administration<br />
significantly easier.<br />
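The first-match evaluation described above can be sketched as follows. This is a minimal model, not a vendor implementation: real ACLs match on prefixes, port ranges and protocols, and the rule fields below (`dst_port`, `src_vlan`, `dst`) are illustrative assumptions.

```python
# Sketch of first-match ACL evaluation: the switch checks each packet
# against the ordered rule list and applies the first rule whose fields
# all match; packets matching no rule are denied ("implicit deny").
# This per-packet inspection is what makes ACLs processor-intensive.
def evaluate_acl(rules, packet):
    for action, match in rules:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "deny"          # implicit deny at the end of every ACL

# Hypothetical rule set for illustration.
acl = [
    ("permit", {"dst_port": 443}),                      # allow HTTPS
    ("deny",   {"src_vlan": "guests", "dst": "datacenter"}),
    ("permit", {"src_vlan": "staff"}),
]
```

Because every packet walks the rule list, configuring ACLs on the few Aggregation Switches instead of on every Access Switch keeps both the processing load and the administration effort in one place.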
Combining links contributes to the avoidance of bottlenecks, which is highly important in this layer. If multiple<br />
switch ports are combined, data throughput can be multiplied by their number (e.g. 8 x 10 Gbit/s = 80 Gbit/s).<br />
The term Link Aggregation is used in accordance with IEEE 802.3ad to describe switch ports that are combined<br />
(parallel switching), a process called EtherChannel by Cisco and also trunking by other providers.<br />
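Two aspects of link aggregation can be sketched briefly: the nominal capacity multiplies with the member count, and each flow is pinned to one member port by hashing its addresses so that the frames of a flow stay in order. The hash input and port counts below are illustrative assumptions, not a standardized algorithm.

```python
import hashlib

def lag_capacity(port_speed_gbit, n_ports):
    """Nominal throughput of an aggregated link, e.g. 8 x 10 Gbit/s."""
    return port_speed_gbit * n_ports

def member_port(src_mac, dst_mac, n_ports):
    """Pick a member port for a flow by hashing its address pair, so
    all frames of the flow take the same physical link (ordering is
    preserved while load spreads across the bundle)."""
    digest = hashlib.sha256(f"{src_mac}-{dst_mac}".encode()).digest()
    return digest[0] % n_ports
```

Note that a single flow never exceeds the speed of one member port; only the aggregate of many flows approaches the full bundle capacity.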
Due to the functions they provide, Aggregation Switches are heavily loaded by the network. It is important to<br />
emphasize that these switches must support redundancy for purposes of availability. The loss of one Aggregation<br />
Switch can have significant repercussions on the rest of the network, since all data from the Access Layer are<br />
forwarded to these switches. Aggregation Switches are therefore generally implemented in pairs so as to ensure<br />
their availability. In addition, it is recommended that switches be used here that support redundant network<br />
components that can be replaced during continuous operation.<br />
Aggregation Switches must finally also support Quality of Service (QoS) in order to uphold prioritization of the data<br />
coming from the Access Switches, if QoS is also implemented on them. Priority guidelines ensure that audio and<br />
video communication are guaranteed to receive enough bandwidth so an acceptable quality of service is ensured.<br />
All switches which forward on data of this type must support QoS so as to ensure prioritization of voice data in the<br />
entire network.<br />
Product example from Cisco<br />
3.3.4. Core Layer / Backbone<br />
The Core Layer in the hierarchical design is the high-speed “backbone” of the network. This layer is critical for<br />
connectivity between the devices in the Aggregation Layer, which means that devices in this layer must also have<br />
high availability and be laid out redundantly. The core area can also be connected with Internet resources. It<br />
combines the data from all aggregation devices and must therefore be able to quickly forward large volumes of data.<br />
It is not unusual to implement a reduced core model in smaller networks. In this case, the Aggregation and Core<br />
Layers are combined into one layer.<br />
Core Switches are responsible for handling most data in a switched LAN. As a result, the forwarding rate is one of<br />
the most important criteria when selecting these devices. Naturally, the Core Switch must also support Link<br />
Aggregation so as to ensure that Aggregation Switches have sufficient bandwidth available.<br />
Due to the severe load placed on Core Switches, they have a tendency to generate higher process heat than<br />
Access or Aggregation Switches; therefore one must make certain that sufficient cooling is provided for them.<br />
Many Core Switches offer the ability to replace fans without<br />
having to turn off the switch.<br />
Product example from Cisco<br />
QoS (Quality of Service) is also an essential service component for Core Switches. Internet<br />
providers (who provide IP, data storage, e-mail and other services) and company WANs (Wide<br />
Area Networks), for example, have contributed significantly to a rise in the volume of voice and<br />
video. Company- and time-critical data such as voice data should receive higher QoS guarantees<br />
in the network core and at the edges of the network than data that is less critical, such as<br />
files that are being transferred or e-mail.<br />
Since access to high-speed WANs is often prohibitively expensive, additional bandwidth in the Core Layer may<br />
not be the solution. Since QoS makes a software-based solution possible for data prioritization, Core Switches<br />
can then provide a cost-effective option for using existing bandwidth in an optimal and diverse manner.<br />
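The prioritization described above can be illustrated with a strict-priority scheduler: the switch always transmits the highest-priority frame waiting in its queues, so voice overtakes bulk traffic on a congested link. The traffic classes and their ordering below are example values, not taken from a QoS standard.

```python
import heapq

# Illustrative strict-priority scheduling: lower class value = served
# first. Real switches use standardized priority code points and often
# weighted rather than strict scheduling; this is only a sketch.
PRIORITY = {"voice": 0, "video": 1, "bulk": 2}

def transmit_order(frames):
    """Return queued (class, payload) frames in the order a
    strict-priority scheduler would transmit them; arrival order
    breaks ties within a class."""
    heap = [(PRIORITY[cls], i, payload)
            for i, (cls, payload) in enumerate(frames)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]
```

A backlog of file-transfer frames thus cannot delay the voice frames behind it, which is the software-based alternative to simply buying more bandwidth.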
3.3.5. Advantages of Hierarchical Networks<br />
A number of advantages are associated with hierarchical network designs:<br />
• Scalability<br />
• Performance<br />
• Ease of administration<br />
• Redundancy<br />
• Security<br />
• Maintainability<br />
Since hierarchical networks are by nature scalable in an easy, modular fashion, they are also very maintainable.<br />
Maintenance for other network topologies becomes increasingly complicated as the network becomes larger. In<br />
addition, a fixed boundary for network growth exists for certain network design models, and if this boundary value<br />
is exceeded, maintenance becomes too complicated and too expensive.<br />
In the hierarchical design model, switching functions are defined for every layer, simplifying the selection of the<br />
correct switch. This does not mean, however, that adding switches to one layer cannot lead to a bottleneck in<br />
another layer or cause some other restriction.<br />
All switches in a fully meshed topology must be high-performance devices so the topology can achieve its<br />
maximum performance. Every switch must be capable of performing all network functions. By contrast, switching<br />
functions in a hierarchical model are differentiated by layer. So in contrast to the Aggregation Layer and Core<br />
Layer, more cost-effective switches can be used in the Access Layer.<br />
Before designing the network, the diameter of the network must first be<br />
examined. Although a diameter is traditionally specified as a length value, in<br />
the case of network technology this parameter for the size of a network must<br />
be measured via the number of devices. So network diameter refers to the<br />
number of devices that a data packet must pass in order to reach its<br />
recipient. Small networks can therefore ensure a lower, predictable latency<br />
between devices. Latency refers to the time a network device requires to<br />
process a packet or a frame.<br />
Each switch in the network must specify the destination MAC address of the<br />
frame, look it up in its MAC address table and then forward the frame over<br />
the corresponding port. If an Ethernet frame must pass a number of switches,<br />
latencies add up even when the entire process only lasts a fraction of<br />
a second.<br />
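The diameter definition above — the number of devices a packet must pass — can be computed with a breadth-first search. The small three-tier topology below (access switches `A1`/`A2` uplinked via aggregation switch `D1` to core `C1`) is a made-up example for illustration.

```python
from collections import deque

# Hypothetical three-tier topology: two access switches, one
# aggregation switch, one core switch.
TOPOLOGY = {
    "A1": ["D1"], "A2": ["D1"],
    "D1": ["A1", "A2", "C1"],
    "C1": ["D1"],
}

def devices_on_path(src, dst):
    """Count the switches a frame traverses from src to dst
    (both endpoints included), i.e. the hop-based path length
    that adds one MAC-table lookup of latency per device."""
    queue, seen = deque([(src, 1)]), {src}
    while queue:
        node, count = queue.popleft()
        if node == dst:
            return count
        for nb in TOPOLOGY[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, count + 1))
    return None
```

Two servers on different access switches are three devices apart here; the network diameter is the largest such value over all endpoint pairs, and each extra device adds its lookup-and-forward latency to the total.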
Very large volumes of data are generally transferred over “data center” switches, since these handle both<br />
server-to-server and client-to-server communication. For this reason, switches that are provided for<br />
data centers offer higher performance than those provided for terminal devices. Active network components and<br />
their additional functionalities are covered in section 3.6.<br />
<strong>Data</strong> centers are geared to tasks and requirements, which include mass storage in addition to arithmetic operations.<br />
Networks for these data centers are high- and maximum-performance networks in which data transfer rates<br />
in the gigabit range can be achieved. Various high-speed technologies are therefore used in data center<br />
architectures, and data rates are increased using aggregation. Storage traffic (SAN) is handled over Fibre<br />
Channel (FC), client-server communication over Ethernet and server-server communication for example over<br />
InfiniBand. These different network concepts are being increasingly replaced by 10 gigabit Ethernet. The advantage<br />
of this technology is that a 40 Gbit/s version also exists, as well as 100 gigabit Ethernet. Therefore 40/100 gigabit<br />
Ethernet is also suited for the Core Layer and Aggregation Layer in data center architectures, and for the use of<br />
ToR switches.<br />
Different Ethernet versions are listed in section 3.8.2 and Ethernet migration paths in section 3.10.2.<br />
3.4. Cabling Architecture in the <strong>Data</strong> <strong>Center</strong><br />
The placement of network components in the data center is an area that has high potential for optimization. So-called<br />
ToR (Top of Rack) concepts have advantages and disadvantages as compared to EoR/MoR (End of<br />
Row/Middle of Row) concepts. One advantage of ToR lies in its effective cabling with short paths up to the server,<br />
whereas one disadvantage is its early aggregation and potential “overbooking” in the uplink direction. This is a<br />
point in favor of EoR concepts, which use chassis with high port densities and serve as a central fabric for all<br />
servers. One will usually find a mix which aggregates 1-Gbit/s connections to 10-Gbit/s in the ToR area, which is<br />
then combined with a direct 10-Gbit/s server connection on an EoR system.<br />
The different cabling architectures as well as their advantages and disadvantages are described below. All servers<br />
used in the concepts are connected via 3 connections each. For redundancy, 5 connections are provided (+1 for<br />
LAN, +1 for SAN).<br />
• LAN: Ethernet over copper or fiber optic cables (1/10/40/100 gigabit)<br />
• SAN: Fibre Channel (FC) over fiber optic cables or<br />
Fibre Channel over Ethernet (FCoE) over fiber optic or copper cables<br />
• KVM: Keyboard/video/mouse signal transmission over copper cables<br />
KVM switches (keyboard/video/mouse) are available starting from stand-alone solutions for 8 to 32 servers up to<br />
complex multi-user systems for data center applications of up to 2,048 computers. The location of the computers<br />
is irrelevant. These computers can therefore be accessed and administered locally or over TCP/IP networks –<br />
worldwide!<br />
3.4.1. Top of Rack (ToR)<br />
Top of Rack (ToR) is a networking concept that was developed specifically for virtualized data centers. As follows<br />
from its name, a ToR switch or rack switch (functioning as an Access Switch) is installed into the top of the rack so<br />
as to simplify the cabling to individual servers (patch cords). The ToR concept is suited for high-performance<br />
blade servers and supports migration up to 10 gigabit Ethernet. The ToR architecture implements high-speed<br />
connections between servers, storage systems and other devices in the data center.<br />
[Figure: ToR cabling architecture – SAN, LAN and KVM switches at the top of each server rack, with uplinks to the Aggregation & Core and SAN areas]<br />
ToR switches have slots for transceiver modules, enabling an optimal and cost-effective adaptation to 10 gigabit<br />
Ethernet or 40-/100 gigabit Ethernet. The port densities that can be achieved in this way depend on the transceiver<br />
module that is used. Thus, a ToR switch occupying one height unit (1U) can have 48 ports with SFP+ modules or 44<br />
ports with QSFP modules.<br />
The non-blocking data throughput of ToR switches can be one terabit per second (Tbit/s) or higher. Different<br />
transceiver modules are described in section 3.6.4.<br />
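The port-density figures above translate directly into aggregate capacity. The sketch below multiplies port count by nominal per-port speed; the 48-port SFP+ and 44-port QSFP counts come from the text, while treating QSFP as a 40 Gbit/s port is an assumption for illustration.

```python
def switch_throughput_gbit(n_ports, port_speed_gbit):
    """Aggregate nominal forwarding capacity of a fully populated
    switch, in Gbit/s (non-blocking assumed)."""
    return n_ports * port_speed_gbit
```

With 44 QSFP ports at 40 Gbit/s the nominal capacity already exceeds one terabit per second, which matches the non-blocking throughput claim above.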
New ToR switches sometimes support <strong>Data</strong> <strong>Center</strong> Bridging (DCB), and therefore Lossless Ethernet and<br />
Converged Enhanced Ethernet (CEE). The operation and significance of <strong>Data</strong> <strong>Center</strong> Bridging is explained in<br />
section 3.8.7.<br />
If one limits the ToR concept to LAN communication, the result is the cabling architecture shown on the right side, where the SAN and KVM connections are designed as in an EoR architecture and an additional rack is therefore required to house them. Additional space for other servers is provided for this purpose in individual racks.<br />
[Figure: ToR architecture limited to LAN communication – SAN and KVM switches are concentrated in a separate rack, as in the EoR architecture]<br />
The port assignment of SAN switches is optimized, which can mean that fewer SAN switches are needed.<br />
The Cisco version, also shown in the image on the right side, represents another example of a ToR architecture. LAN fabric extenders are used here in place of ToR switches and can also support Fibre Channel over Ethernet (FCoE) in addition to 10 gigabit Ethernet connectivity. By using these extenders of the Unified Computing System (UCS) architecture, Cisco provides standardization in the LAN/SAN networks, also known as I/O consolidation.<br />
[Figure: Cisco ToR variant – LAN fabric extenders in the server racks, with uplinks to the Aggregation & Core and SAN areas]<br />
Though these various ToR concepts appear to be different, they nevertheless have common features.<br />
Advantages:<br />
• Smaller cable volume (space savings in horizontal cabling and reduced installation costs)<br />
• Suitable for high server density (blade servers)<br />
• Easy to add additional servers<br />
Disadvantages:<br />
• No optimal assignment of LAN ports (no efficient use of all switch ports, possibly unnecessary additional switches)<br />
• Inflexible relation between Access and Aggregation, which leads to problems with increased server performance and aggregation of blade servers, as well as the introduction of 100 GbE (scalability, future-proofing)<br />
3.4.2. End of Row (EoR) / Dual End of Row<br />
In an End of Row architecture, a group of server cabinets is supplied by one or more switches. <strong>Data</strong> cables are<br />
installed in a star pattern from the switches to the individual server cabinets. This is generally 3 cables (LAN, SAN,<br />
KVM) for each server, or 5 for redundant LAN & SAN connections. For 32 1U servers, this would then require at<br />
least 96 data cables per cabinet.<br />
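The cable arithmetic above can be stated as a one-line calculation: 3 connections per server (LAN, SAN, KVM), or 5 when redundant LAN and SAN links are added. The function is a trivial planning sketch under exactly those assumptions.

```python
def cables_per_cabinet(servers, redundant=False):
    """Data cables required per server cabinet in an EoR layout:
    3 per server (LAN, SAN, KVM), or 5 with redundant LAN and SAN
    connections (+1 LAN, +1 SAN)."""
    per_server = 5 if redundant else 3
    return servers * per_server
```

For a cabinet of 32 1U servers this yields the 96 cables mentioned in the text, or 160 with full redundancy.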
[Figure: EoR cabling architecture – Access Switches (LAN, SAN, KVM) concentrated at the end of the cabinet row, with star cabling to the server cabinets and uplinks to the Aggregation & Core and SAN areas]<br />
Since 2010, the performance of virtualized servers has increased dramatically through the systematic introduction<br />
of SR-IOV (Single Root I/O Virtualization) by leading processor manufacturers. Up to that point, data throughput in<br />
virtualized systems was limited to 3 – 4 Gbit/s, and even less than 1 Gbit/s during continuous operation. A doubled<br />
1 GbE connection was therefore the right solution.<br />
The modern alternative is to directly connect servers to high-performance aggregation switches. This solution is<br />
possible because I/O performance in a virtualized system is currently about 20 – 30 Gbit/s, since the hypervisor<br />
(the virtualization software which creates an environment for virtual machines) is freed up from tasks of switching<br />
wherever possible, through SR-IOV and suitable hardware. There are server blades where, in order to<br />
reach their full potential, a single blade must have a dual 10 GbE connection. Blade systems that have a total of<br />
up to 8 blade servers must therefore be connected at 100 GbE.<br />
Blade system manufacturers report they are currently working on improving inner aggregation, so a 100 GbE<br />
connection is expected to be offered by 2011. A conventional ToR structure design would therefore be completely<br />
overloaded.<br />
In addition, no cost-effective switches are available on the market which provide the necessary flexibility and can<br />
be equipped with new, high-quality control functions such as DCB, which only make sense when they work end-to-end.<br />
An EoR structure therefore has the following features.<br />
Advantages:<br />
• Flexible, scalable solution (future-proof)<br />
• Optimal LAN port assignment (efficient)<br />
• Concentration of Access Switches (simplified moves/adds/changes)<br />
• Space optimization in racks for server expansion<br />
Disadvantages:<br />
• Greater cable volume in horizontal cabling<br />
• Many cable patchings for EoR switches (server and uplink ports)<br />
Dual End of Row<br />
The Dual End of Row variant is attractive for reasons of redundancy, since the concentration of Access Switches<br />
is split up on both ends of the cabinet row and the corresponding cable paths are also separated as a result. Apart<br />
from that, the solution corresponds to the EoR principle.<br />
[Figure: Dual End of Row cabling architecture – Access Switches split between both ends of the cabinet row, with redundant star cabling to the server cabinets]<br />
The following features are provided with this cabling architecture, in addition to the redundancy option.<br />
Advantages:<br />
• Flexible, scalable solution (future-proof)<br />
• Relatively good LAN port assignment (efficient)<br />
• Concentration of Access Switches in 2 places (simplified moves/adds/changes)<br />
• Space optimization, allowing for server expansion<br />
Disadvantages:<br />
• Greater cable volume in horizontal cabling<br />
• Many cable patchings for EoR switches (server and uplink ports), but ½ of EoR<br />
• Longer rows of cabinets if necessary (+1 rack)<br />
3.4.3. Middle of Row (MoR)<br />
In MoR concepts, the concentration points containing the Access Switches are not located at the ends, but centrally within the rows of racks.<br />
[Figure: MoR cabling architecture – Access Switches concentrated in the middle of the cabinet row]<br />
Advantages: (similar to EoR)<br />
• Flexible, scalable solution<br />
• Optimal port assignment<br />
• Easy MAC (moves/adds/changes)<br />
• Easier server expansion<br />
• Shorter distances compared to EoR<br />
Disadvantages: (like EoR)<br />
• Greater cable volume in horizontal cabling<br />
• Many cable patchings for MoR switches<br />
3.4.4. Two Row Switching<br />
In this concept, the terminal devices (server and storage systems) and the network components (switches) are<br />
housed in separate cabinet rows, which can be a sensible option in smaller data centers.<br />
[Figure: Two Row Switching – servers and storage in one cabinet row, LAN, SAN and KVM switches in a separate row, with redundant cabling between the rows and uplinks to the Aggregation & Core area]<br />
This cabling structure, which is comparable to the Dual End of Row concept, stands out because of the following<br />
features.<br />
Advantages: (similar to EoR)<br />
• Flexible, scalable solution (future-proof)<br />
• Relatively good LAN port assignment (efficient)<br />
• Concentration of Access Switches in 2 places (simplified moves/adds/changes)<br />
• Space optimization in racks, allowing for server expansion<br />
• Shorter distances in backbone<br />
Disadvantages: (like EoR)<br />
• Greater cable volume in horizontal cabling<br />
• Many cable patchings for EoR switches (server and uplink ports), but ½ of EoR<br />
3.4.5. Other Variants<br />
Other possible variants of cabling structures are listed below. As mentioned at the start, combinations or hybrid forms of the described cabling architectures are used in practice, depending upon physical conditions and requirements.
The same is true with the selection of transmission media, which can determine data center sustainability in terms<br />
of migration options (scalability). An overview of different transmission protocols and the media they support as<br />
well as their associated length restrictions can be found in section 3.8.<br />
Integrated Switching<br />
Blade servers are used in integrated switching. They consist of a housing with integrated plug-in cards for switches and servers. Cabling is usually reduced to FO backbone cabling, though this is not the case in the following configuration.
(Figure: integrated switching with blade servers; application servers or storage libraries, aggregation & core, SAN. Legend: SAN, LAN and KVM connections, uplinks; SAN, LAN and KVM switches.)
Integrated switching generally includes I/O consolidation as well, which means that communication with the SAN<br />
environment is implemented using FCoE (Fibre Channel over Ethernet).<br />
Pod Systems<br />
Pod systems are groups of 12 to 24 self-contained rack systems. They are highly optimized and efficiently constructed with regard to power consumption, cooling performance and cabling, so as to enable rapid replication. In addition, each pod system can be monitored and evaluated individually.

Because of their modularity, pod systems can be put into use regardless of the size of the data center. In organizations that require higher data center capacity, units are replicated as necessary until the required IT performance is achieved. With smaller pod systems, lab and data center capacities can be expanded in a standardized manner, depending upon business development. In this way, the organization can react flexibly to acquisitions and other business expansions, corporate consolidations or other developments dictated by the market.
Any consolidated solution must consider the “single point of failure” problem. This means that systems and their paths of communication should be laid out redundantly.
Uplinks<br />
All terminal devices in the network are connected to ports on switches (previously hubs), which act as distribution centers and forward the data traffic to the next higher network level in the hierarchy (also see section 3.3). This means that the forwarding system must support a multiple of the data rates of the individually connected devices so that the connection does not become a “bottleneck”. These connections, so-called uplinks, must therefore be designed for high performance. For example, if all subscribers on a 24-port switch want to transfer data at the same time at an unrestricted rate of 1 Gbit/s, the uplink must make at least 24 Gbit/s available. However, since this is a theoretical worst case, a 10 Gbit/s connection, which can transfer 20 Gbit/s in full-duplex operation, would be used in practice.
Uplink ports can also serve as media converters (copper to fiber) and can exist as permanently installed interfaces or be modular (free slot), which increases flexibility and scalability, since a wide range of interfaces is offered by most manufacturers (also see section 3.6.4).
In contrast to user ports, the transmit and receive lines of a switch's uplink port are interchanged. In modern devices, ports are equipped with functionality known as auto uplink (or Auto MDI-X). This allows them to recognize on their own whether the transmit and receive lines need to be swapped, so separate uplink ports or crossover cables are unnecessary. Connecting switches in series by means of uplinks is known as cascading. The following illustration demonstrates cascading from the access to the aggregation area. In the left image, these connections are established over permanently installed cables; in the right image, by means of patch cords over the cable duct.
(Figure: cascading to aggregation & core, left via permanently installed cables, right via patch cords over the cable duct. Legend: SAN, LAN and KVM connections, redundancy, uplinks; SAN, LAN and KVM switches.)
These variants can be a sensible option for short distances, especially since they involve two fewer connections in between and at the same time save patch panels.
Arranging Patch Panels<br />
Patch panels do not have to be installed horizontally, or in the rack at all. There are attractive alternatives, if only from the viewpoint of making efficient use of the already scarce free space in the data center.
(Figure: alternative patch panel arrangements: vertical mounting in the cabinet and a floor box. Legend: SAN, LAN and KVM connections.)
R&M provides space-saving solutions for this purpose. For example, three patch panels can be installed in the Robust (CMS) cabinet type on both the front and rear 19” mounting plane, on the left and right side of the 19” sections, as shown in the center image. By using high-density panels with 48 ports each, 288 connections can be built into a rack of this type, front and rear, while the full installation height remains available.

Similarly, 288 connections (copper or fiber optic) can be supported with R&M’s “Raised Floor Solution”, an underfloor distributor that can mount up to six 19” patch panels (image on the right). The distributor platform is designed for raised floors (600 x 600 mm) and therefore allows the use of narrow racks (server racks), which in turn leads to space savings in the data center. Take care when installing these solutions that the supply of cooled air is not blocked.
Cable Patches for Network Components<br />
In smaller data centers, distribution panels and active network components are often housed in the same rack, or<br />
racks are located directly next to one another.<br />
(Figure: patching variants: interconnect, combined and cross-connect. Legend: SAN, LAN and KVM connections, redundancy, uplinks; SAN, LAN and KVM switches.)
In the image on the left above, we see the components are separated into different racks which are connected to<br />
one another with patch cords. In the center example, all components are located in the same cabinet, which<br />
simplifies patching. In the illustration on the right, active network devices are separated once again, but are now<br />
routed into the second cabinet over installed cables to patch panels, a process that is solved most easily using<br />
pre-assembled cables. Patching is then performed on these patch panels.<br />
SAN<br />
These days, companies generally use Ethernet for TCP/IP networks and Fibre Channel for Storage Area Networks<br />
(SAN) in data centers. Storage Area Networks are implemented in companies and organizations that, in contrast<br />
to LANs, require access to block I/O for applications like booting, mail servers, file servers or large databases.<br />
However, this area is undergoing a radical change, and Fibre Channel over Ethernet (FCoE) and the iSCSI<br />
method are competing to take the place of Fibre Channel technology in the data center.<br />
Many networks in the data center are still based on a “classic” setup: Fibre Channel is used for the SAN, Ethernet<br />
for LAN (both as a server-interconnect and for server-client communication). Accordingly, both a network interface<br />
card (NIC), frequently a 10GbE NIC, as well as an appropriate FC host bus adapter (HBA) are also built into each<br />
server.<br />
Nevertheless, in the medium term communication is heading toward a converged infrastructure, as LAN and SAN are merging more and more. Both economic and operational aspects speak in favor of this, one example being the standardization of the components in use. The convergence from FC to FCoE is described in section 3.7.2.
3.5. Data Center Infrastructure
Enormous savings potential in energy demand and CO2 reduction can result from properly designing a data center and installing servers and racks. This especially applies to the layout of rack rows and the conduction of heated and cooled air. Even small changes in the arrangement of supply and return air can lead to dramatic improvements and higher energy efficiency. This in turn assumes an “intelligent” room concept, which saves space yet at the same time allows for expansion, saves energy and is cost-effective.
In addition to expandability, one must make sure when equipping a data center that the facility is easy to maintain. Data center accesses (doors) and the structural engineering must be designed in such a way that large, heavy active components and machines can be installed, maintained and replaced. Access doors must therefore be at least 1 meter wide and 2.1 meters high, lockable and without thresholds.

The room must have a headroom of at least 2.6 meters from the floor to the lowest installations, and a floor loading capacity of more than 750 kg/m² (2.75 meters and 1,200 kg/m² is better still).
Placing cabinets in rows results in arrangements that are advantageous for cooling active components. One must make sure that aisles have sufficient space to allow active components to be installed and dismantled. Since the installation depth of active components has increased continuously in recent years, an aisle width of at least 1.2 meters must be provided. If aisles cannot be made equally wide, the planner should make the aisle at the front of the cabinet row the larger one. If a raised floor is used, cabinet rows should be placed so that at least one side of the cabinets is flush with the grid of the base tiles, and at least one row of base tiles per passageway can be opened.
3.5.1. Power Supply, Shielding and Grounding<br />
Proper grounding is a key factor in any building. It protects the people within from dangers originating from electricity. Regulations for grounding systems are established locally in the individual countries and are generally well known to installation companies.

With the introduction of 10 Gigabit Ethernet, shielded cabling systems have been gaining popularity around the world. However, since mostly unshielded data cabling systems have been installed in large parts of the world up to this point, exact knowledge about the grounding of shielded patch panels and the options available for it is important.
Importance of Grounding<br />
Every building in which electrical devices are operated requires an appropriate grounding concept. Grounding and<br />
the ground connection affect safety, functionality and electromagnetic interference resistance (EMC: electromagnetic<br />
compatibility).<br />
By definition, electromagnetic compatibility is the ability of an electrical installation to operate in a satisfactory<br />
manner in its electromagnetic environment, without improperly affecting this environment in which other installations<br />
operate. The grounding system of buildings in which IT devices are operated must therefore satisfy the<br />
following requirements:<br />
• Safety from electrical hazards<br />
• Reliable signal reference within the entire installation<br />
• Satisfactory EMC behavior, so all electronic devices work together without trouble<br />
In addition to current local regulations, the following international standards provide valuable aid in this area: IEC 60364-5-548, EN 50310, ITU-T K.31, EN 50174-2, EN 60950, TIA-607-A.
Ideally, low- and high-frequency potential equalization for EMC is meshed as tightly as possible between all metal<br />
masses, housings (racks), machine and system components.<br />
Alternating Current Distribution System<br />
A TN-S system should always be used so that the neutral conductor of the alternating current distribution system is separated from the grounding network. TN-C systems with PEN conductors (protective earth and neutral combined) must not be used for internal installations supplying IT devices.
There are essentially two possible configurations for building grounding systems, described as follows.<br />
Grounding System as a Tree Structure<br />
A tree or star configuration was traditionally the<br />
preferred configuration for the grounding system in<br />
the telecommunications sector.<br />
In a tree structure, the individual grounding conductors are connected with one another only at a central grounding point. This method avoids ground loops and reduces interference from low-frequency interference voltages (humming).
(Figure: grounding system as a tree structure: horizontal cables from the TO are bonded via the TGBB (Telecommunications Ground Bonding Backbone) to the MET (Main Earthing Terminal); building structural steel shown at the sides. Note: power connections shown are logical, not physical.)
Grounding System as a Mesh Structure<br />
The basic concept of a meshed system is not to<br />
avoid ground loops, but to minimize these loops to<br />
the greatest extent possible, and to distribute the<br />
currents flowing into them as evenly as possible.<br />
Meshed structures are almost always used today to<br />
ground high-frequency data transfer systems, since<br />
it is extremely difficult to achieve a proper tree<br />
structure in modern building environments. The<br />
building as a whole must have as many suitable<br />
grounding points as possible if this type of grounding<br />
is to be used. All metallic building components must<br />
always be connected to the grounding system using<br />
appropriate connection components. The conductive<br />
surfaces and cross-sections of these connection<br />
elements (e.g. metal strips and bars, bus<br />
connections, etc.) must be as large as possible so<br />
that they can draw off high-frequency currents with<br />
little resistance.<br />
In buildings in which no continuous meshed structure can be set up for the earthing system, the situation can be<br />
improved by setting up cells. A locally meshed grounding system of this type is achieved by using cable ducts<br />
made of metal, conductive raised floors or parallel copper conductors.<br />
(Figure: locally meshed grounding system: building structural steel, metallic conduit, TO and MET interconnected. Note: power connections shown are logical, not physical.)
Grounding Options for Patch Panels<br />
All patch panels should be suitable for both tree as well as meshed grounding systems, so that customer options<br />
remain completely flexible. In addition to the grounding system in use (mesh or tree), selection of the method to be<br />
used is also based on whether or not the 19” cabinet or mounting rail is conductive.<br />
The earthing connection for patch panels must not be daisy-chained from one patch panel to another, since this increases impedance. Every patch panel in the cabinet must be grounded individually, in a tree structure, to the cabinet or to ground busbars.
Grounding and Ground Connection in the Cabinet<br />
Once the patch panel has been bonded to the cabinet, the cabinet itself must be bonded to ground. In a tree configuration it would connect to the telecommunications main grounding busbar (TMGB) or a telecommunications grounding busbar (TGB).
In a mesh configuration it would be connected to the nearest grounding point of the building structure. It is crucial for this ground connection that the impedance of the current path to ground is as low as possible. The impedance (Z) depends on the ohmic resistance (R) and the line inductance (L):

Z = ωL + R, with ω = 2πf

This relationship should be considered when choosing the earthing strap that bonds the cabinet to ground. Flat cables, woven bands or copper sheet metal strips exhibit low impedance characteristics. At a minimum, a stranded conductor with a cross-section of 4-6 mm² should be used (16 mm² is better).
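To illustrate why low-inductance straps matter, the impedance relationship above can be evaluated numerically. The conductor values below are rule-of-thumb assumptions (roughly 1 µH/m for a round wire, several times lower for a wide flat strap), not figures from the handbook.

```python
import math

def impedance_ohms(r_ohms: float, l_henry: float, f_hz: float) -> float:
    """Simplified impedance Z = R + omega*L with omega = 2*pi*f, as in the text."""
    omega = 2 * math.pi * f_hz
    return r_ohms + omega * l_henry

# Assumed values for a 1 m bonding conductor; DC resistance is
# negligible next to the inductive term at high frequencies.
f = 10e6  # 10 MHz interference current
print(impedance_ohms(0.005, 1.0e-6, f))  # round wire: ~62.8 ohms
print(impedance_ohms(0.005, 0.2e-6, f))  # flat strap: ~12.6 ohms
```

At 10 MHz the inductive term dominates completely, which is why the geometry of the strap (large surface, low inductance) matters far more than its DC resistance.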
With regard to EMC, an HF-shielded housing with improved shielding effectiveness may be required in applications<br />
with high-frequency field-related influences. A definite statement regarding what housing version is required to<br />
satisfy specific standard limits can be made only after measurements have been taken.<br />
3.5.2. Cooling, Hot and Cold Aisles<br />
Every kilowatt (kW) of electrical power that is used by IT devices is later released as heat. This heat must be<br />
drawn away from the device, cabinet and room so that operating temperatures are kept constant.<br />
Air conditioning systems that operate in a variety of ways and offer different levels of performance are used to draw away this heat. Providing air conditioning to IT systems is crucial for their availability and security. The increasing integration and packing density of processors and computer/server systems causes a level of waste heat that was unimaginable in such a limited space only a few years ago.
Various solutions for air conditioning exist on the market and are based on power and dissipation loss, in other<br />
words the waste heat from the IT components in use. With more than 130 W/cm² per CPU – this corresponds to<br />
two standard electric light bulbs per square centimeter – the job of “data center air conditioning” has been gaining<br />
in importance. This power density results in heat loads that are much greater than 1 kW per square meter.<br />
According to actual measurements and practical experience, a dissipation loss of up to 8 kW can still be handled in a rack or housing by a conventional air conditioning system implemented through a raised floor and cooling air, as still found in many data centers. However, the raised-floor air flow system of conventional mainframe data centers can no longer meet today's in some instances extremely high requirements.

Although a refrigerating capacity of 1 to 3 kW per 19” cabinet was sufficient for decades, current cooling capacities per rack must be increased significantly. Current devices installed in a 19” cabinet with 42 height units can take in over 30 kW of electrical power and therefore emit over 30 kW of heat. A further increase in cooling capacity can be expected as device performance continues to increase while device sizes become smaller.
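As a rough sanity check, the rack loads above can be converted into the airflow needed to carry the heat away, using the standard relation V = P / (rho · cp · dT). The air properties and the 12 K temperature rise are assumed illustrative values, not handbook figures.

```python
# Volumetric airflow required to remove a heat load at a given
# air temperature rise: V = P / (rho * cp * dT).
RHO_AIR = 1.2    # kg/m^3, density of air (assumed standard conditions)
CP_AIR = 1005.0  # J/(kg*K), specific heat capacity of air

def airflow_m3_per_h(heat_load_w: float, delta_t_k: float) -> float:
    """Airflow in m^3/h that carries heat_load_w away at a delta_t_k rise."""
    return heat_load_w / (RHO_AIR * CP_AIR * delta_t_k) * 3600

# Legacy rack, raised-floor limit, dense modern rack (12 K rise assumed):
for load_kw in (3, 10, 30):
    flow = airflow_m3_per_h(load_kw * 1000, 12)
    print(f"{load_kw} kW: {flow:.0f} m^3/h")
```

The jump from roughly 750 m³/h at 3 kW to about 7,500 m³/h at 30 kW makes clear why pure raised-floor air cooling runs out of headroom and containment or water cooling becomes necessary.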
In order to improve the performance of existing air conditioning solutions that use raised floors, containment is<br />
currently provided for active components, which are arranged in accordance with the cold aisle/hot aisle principle.<br />
Cold and hot aisles are sometimes enclosed in order to enable higher heat emissions per rack.<br />
A cabling solution that is installed in a raised floor, and that is well-arranged and professionally optimized in accordance<br />
with space requirements, can likewise contribute to improved air conditioning.<br />
The decision criteria for an air conditioning solution include, among other things, the maximum expected dissipation loss, operating costs, acquisition costs, installation conditions, expansion costs, future-proofing, and the costs of downtime and physical security.
There are basically two common air conditioning types:

• Closed-circuit air conditioning
• Direct cooling

(Figure: solution per the cold aisle/hot aisle principle, closed-circuit air conditioning. Source: BITKOM)
Closed-Circuit Air Conditioning Operation<br />
In the past, room conditions for data center air conditioning were limited to room temperatures of 20 °C to 25 °C and a relative humidity of 40% to 55% RH. Now, because of the cold aisle/hot aisle principle, requirements are specified for supply and exhaust air instead, since uniform room conditions in the actual sense no longer exist in the entire space to be air conditioned.

Supply air conditions in the cold aisle should be between 18 °C and 27 °C, depending upon the application; relative supply air humidity should be between 40% and 60% RH. Lower humidity leads to electrostatic charging, higher humidity to corrosion of electrical and electronic components. Operating conditions with extremely cold temperatures of under 18 °C and high humidity, which lead to the formation of condensation on IT devices, must always be avoided. A maximum temperature fluctuation of 5 °C per hour must also be taken into consideration.
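A monitoring system might encode the supply-air limits above (18-27 °C, 40-60% RH, at most 5 °C change per hour) as a simple check. The function and its sample readings are hypothetical; only the thresholds come from the text.

```python
# Check a supply-air reading against the cold-aisle limits from the text.
def check_supply_air(temp_c, rh_percent, temp_change_per_h):
    """Return a list of violated conditions (empty list means all OK)."""
    problems = []
    if not 18.0 <= temp_c <= 27.0:
        problems.append("temperature outside 18-27 C")
    if not 40.0 <= rh_percent <= 60.0:
        problems.append("humidity outside 40-60 % RH")
    if abs(temp_change_per_h) > 5.0:
        problems.append("temperature changing faster than 5 C/h")
    return problems

print(check_supply_air(24.0, 55.0, 1.0))  # [] -> within limits
print(check_supply_air(16.0, 65.0, 1.0))  # cold and too humid: two violations
```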
Higher temperatures may be selected today than previously, thanks to the improved manufacturing quality of silicon chips. Preset temperatures of 24 °C to 27 °C and relative humidities of about 60% are currently state of the art.
In the closed-circuit air conditioning principle, the air cooled by the air conditioning system circulates to the IT components, takes in heat, and the warmed air then returns to the air conditioning system as return air to be re-cooled. Only a small amount of outside air is introduced into the room to be air-conditioned, for air exchange. Optimal conditions with respect to temperature and relative humidity can only be achieved with closed-circuit air-conditioning units, so-called precision air-conditioning units. The energy used in these systems is better utilized, i.e. reducing the temperature of the return air is the first priority.

These units contrast with the comfort air-conditioning units used for residential and office spaces, such as split units, which continuously use a large portion of the energy they consume to dehumidify recirculated air. This can lead not only to critical room conditions but also to significantly higher operating costs, which is why their use is not economically feasible in data centers.
Rack arrangement and air flow are both crucial factors in the performance of closed-circuit air conditioning systems. This is why 19” cabinets in particular are currently arranged in accordance with the so-called hot aisle/cold aisle principle, so as to reproduce as well as possible the horizontal air flow required by IT and network components. In this arrangement, the air flow is forced to take in the heat from the active components on its path from the raised floor back to the air conditioning. A distinction is made between self-contained duct systems (Plenum Feed, Plenum Return) and circulation via the room air (Room Feed, Room Return).
One must make sure that the 19” cabinets correspond to the circulation principle that is selected. In “Room Feed, Room Return”, cabinet doors (if they exist) must be perforated at least 60% in order to permit air circulation. The floor of the cabinet should be sealed to the raised floor at least on the warm side, in order to avoid mixing air currents. A cooling principle of this type is suited for power up to approximately 10 kW per cabinet.

In “Plenum Feed, Plenum Return”, it is absolutely necessary that the cabinet has doors and that they can be sealed airtight. The floor, by contrast, must be sealed on the warm side and open on the cold side. These solutions are suitable for power of approximately 15 kW.

(Figure: Plenum Feed, Plenum Return. Source: Conteg)
Within the cabinet no openings may exist between the cold side and the warm side, to ensure that mixtures and<br />
microcirculations are avoided. Unused height units and feedthroughs should be closed with blind panels and seals.<br />
Performance can be further optimized by containment of the cold or warm aisle (Contained Cold/Warm Aisle).<br />
Of crucial importance is the height of the raised floor, over which the area of the data center is to be provided with<br />
cold air. Supply air comes out via slot plates or grids in the cold aisle. After it is warmed by the IT devices, the<br />
return air reaches the air conditioning system again to be re-cooled.<br />
Direct-Cooling Principle – Water-Cooled Server Rack<br />
Direct cooling of racks must be implemented when heat loads exceed 10 to 15 kW per cabinet. This is realized via a heat exchanger installed in the immediate vicinity of the servers, usually a cold-water heat exchanger arranged either below or next to the 19” fixtures. Up to 40 kW of heat per rack can be drawn off in this way. A cold-water infrastructure must be provided in the rack area for this method.
The water-cooled racks ensure the proper climatic conditions for their server cabinets,<br />
and are therefore self-sufficient with respect to the room air-conditioning system. In<br />
buildings with a low distance between floors, water-cooled server racks are a good<br />
option for drawing off high heat loads safely without having to use a raised floor. The<br />
high-performance cooling system required for higher power also includes a cold and<br />
hot aisle containment.<br />
The following illustrations show the power densities that can be realized with various air conditioning solutions (Source: BITKOM):

• Air conditioning via raised floor without arranging racks for ventilation
• Air conditioning via raised floor with racks arranged in cold/hot aisles
• Air conditioning via raised floor with housing for cold aisles
• Air conditioning system with water-cooled housing for hot aisles (top view)
• Air conditioning system with water-cooled housing for cold aisles (top view)
• Air conditioning with water-cooled rack (sealed system)
Air flow in the data center is essential for achieving optimal air conditioning. The question is whether hot or cold<br />
aisles make sense economically and are energy-efficient.<br />
In a cold aisle system, cold air flows through the raised floor. The aisle between the server cabinets is enclosed airtight. Since the floor is only opened up in this encapsulated area, cold air flows in only here. Data center operators therefore often supply this inlet with higher pressure. Nevertheless, the excess pressure should be moderate, since the fans in the servers regulate the air volume automatically. These are fine points, but they contribute greatly to optimizing the PUE value.
Computers blow waste heat into the remaining space. It rises upwards and forms a cushion of warm air under the<br />
ceiling. The closed-circuit cooling units suck out the air again, cool it and draw it back over the raised floor into the<br />
cold aisle between the racks. This way warm and cold air does not mix together and unnecessary losses or higher<br />
cooling requirements are avoided.<br />
In hot aisle containment, on the other hand, the server cabinets are located in two rows with their back sides facing one another. Air flows through the computers from the outside to the inside via an air duct, so the waste heat is concentrated in the hot aisle between the rows of racks. The raised floor between them is sealed tight. The warm air is fed directly into the closed-circuit cooling units, which use a heat exchanger to cool it down to the specified supply air temperature and then blow it into the outer room. The difference in temperature between supply and exhaust air can be monitored and controlled better here than in the cold aisle process. On the other hand, supply air temperature and flow velocity are easier to regulate in the cold aisle process because of the smaller volume.
Which advantages and disadvantages prevail in the end must be examined in the overall consideration of the data<br />
center.<br />
No matter what approach is selected for waste heat management, one must always make sure that air circulation<br />
is not obstructed or even entirely blocked when setting up horizontal cabling (in the raised floor or under the<br />
ceiling) and in vertical cable routing (in the 19” cabinet when connecting to active components). In this way,<br />
cabling can also contribute to the energy efficiency of the data center.<br />
3.5.3. Raised Floors and Dropped Ceilings<br />
The raised floor, ideally 80 centimeters in height, also has a great influence on hot and cold aisles. Among its other functions, the floor is used to house cabling. Since the floor also serves as the air duct, cables must not run across the direction of the air current and thus block the air flow. One should therefore consider running the cabling in the ceiling and using the raised floor only for cooling, or at least restricting cabling to the area of the hot aisle.
Depending upon building conditions, possible solutions for data center cable routing include the raised floor, cable<br />
trays suspended from the ceiling or a combination of these two solutions.<br />
The supply of cold air to server cabinets must not be restricted when cables are routed through the raised floor. It is recommended that copper and fiber optic cables be laid in separate cable trays. Data and power supply cables must be separated in accordance with EN 50174-2. The cable trays at the bottom need to remain accessible when multiple cable routes are arranged on top of one another.<br />
[Figure: Cable routing in the raised floor under cold and hot aisles. Labels: cold aisle, hot aisle, cold air corridor, data cables, power supply, perforated base plates.]<br />
It is recommended that racks be connected with cabling along the aisles. In this arrangement, the smaller-volume power supply cables are routed under the cold aisle directly on the base floor, while the more voluminous data cables are routed under the hot aisle in cable trays mounted on the raised floor supports.<br />
If the air-conditioning units in this arrangement are also placed in line with the aisles, corridors result under the raised floor that can route cold air to the server cabinets without significant obstacles.<br />
If there is no raised floor in the data center or if the raised floor should remain clear for the cooling system, the<br />
cabling must be laid in cable trays suspended under the ceiling. One must also make sure that the distances<br />
between power and data cables as prescribed in EN 50174-2 are observed.<br />
Arranging cable routes vertically on top of one another directly above the rows of cabinets is one possible solution.<br />
In this case, the cable routing must not obstruct the lighting provided for server cabinets, cover security sensors, or<br />
obstruct any existing fire extinguishing systems.<br />
The use of metal cable routing systems for power cables and copper data cables improves EMC protection. Cable<br />
routing systems made of plastic, with arcs and cable inlets that delimit the radii, like R&M’s Raceway System, are<br />
ideal for fiber optic cabling.<br />
A maximum filling level of 40% should be assumed when dimensioning the cable tray. The use of small-diameter cables can be important in this regard. These significantly reduce the volume required in the cross-section of the cable routing system, as well as the weight. The AWG26 installation cable (Ø: 0.40 mm) from R&M is one example of this cable type; it complies with all defined cabling standards up to a maximum cable length of 55 m, and only takes up about 60% of the volume of a standard AWG22 cable (Ø: 0.64 mm).<br />
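The 40% rule can be turned into a quick dimensioning check: sum the cable cross-sections and compare them against the usable fraction of the tray cross-section. A sketch with illustrative dimensions (the tray size and cable outer diameter are assumptions for the example, not R&M product data):<br />

```python
import math

def max_cables(tray_w_mm: float, tray_h_mm: float,
               cable_od_mm: float, fill_limit: float = 0.40) -> int:
    """How many cables fit in a tray at the recommended 40% fill level.

    Compares the summed cable cross-sections against the usable
    fraction of the tray cross-section.
    """
    tray_area = tray_w_mm * tray_h_mm
    cable_area = math.pi * (cable_od_mm / 2) ** 2
    return int(fill_limit * tray_area // cable_area)

# Hypothetical 300 x 100 mm tray with 6 mm OD installation cables
print(max_cables(300, 100, 6.0))
```

Halving the cable outer diameter roughly quadruples the count, which is why small-diameter cables pay off in the tray cross-section.<br />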
Mixed cable routing systems are increasingly gaining in popularity where basic data center conditions permit this method to be used. In this setup, data cables are routed in a cable routing system under the ceiling, and the power supply is routed in the raised floor. This ensures the necessary distance between data cabling and the power supply in an elegant way that does not require extensive planning, and cooling capacity is not impaired. All other requirements already mentioned must be fulfilled here as well.<br />
[Figure: Mixed cable routing under ceiling and in hollow floor. Labels: max. filling level 40%; metal cable routing for copper cabling; plastic cable routing for fiber optics; metal cable routing for power supply; min. height 2.6 m.]<br />
3.5.4. Cable Runs and Routing<br />
The cable routing system in the data center must fulfill the following demands.<br />
• It must satisfy cabling requirements so no losses in performance occur<br />
• It must not obstruct the cooling of active components<br />
• It must fulfill the requirements of electromagnetic compatibility (EMC)<br />
• It must support maintenance, changes and upgrades<br />
Depending upon building conditions, possible solutions for cable routing in the data center are the raised floor,<br />
cable trays suspended from the ceiling or a combination of these two solutions, as already described in the preceding<br />
section.<br />
The cable routing in many “historically developed” data centers and server rooms can only be described as catastrophic. This is due to two shortcomings:<br />
• Structured cabling in the data center is a relatively modern concept. For a long time, server connections were created using patch cords routed spontaneously through the raised floor between the servers and switches. An infrastructure of cable routing systems did not, and often still does not, exist in the raised floor, and “criss-cross cabling” is the result.<br />
• In many cases, the patch cords that were laid were documented only poorly or not at all, so it was impossible<br />
to remove connecting cords that were no longer needed or defective.<br />
The result is very often raised floors that are completely overfilled, though cable routing is in principle only an<br />
additional function of the raised floor in the data center. In most data centers these raised floors are also required<br />
for server cabinet ventilation/cooling. An air flow system in the raised floor with as few obstructions as possible is<br />
rising in importance, if only in view of the increasingly greater cooling capacities required for systems. Any kind of<br />
chaotic cabling interferes with this. Since cabling systems are constantly changing, it is very difficult for air conditioning technicians to dimension the air flow. This argues for cabling technologies such as glass fiber or “multi-core cables”, which save on cable volume, and for the consistent use of cable routing systems like trays or grids. These systems ensure that cable routing is orderly, prevent the proliferation of cables, and also protect the cables that are laid.<br />
What system provides which advantages?<br />
Tray<br />
The advantage of a cable tray is that cables are better protected mechanically. According to the results of a test carried out by the laboratories AEMC Mesure and CETIM, a tray does not provide essentially better electromagnetic protection than a mesh cable tray. Both produce the same “Faraday cage” effect (source: www.cablofil.at). A key disadvantage must be mentioned: only trays that prevent cable damage at the cable entry and exit points by means of appropriate edge protection should be used. If an exit point is added later, it is very difficult to retrofit the edge protection, and the cable could be damaged. Providing an additional cover reduces the danger that “external cables”, such as power cables laid after construction, may be placed on data cables and impair their operation.<br />
Mesh Cable Tray<br />
A mesh cable tray does not provide the same mechanical protection as the regular cable tray described above. At higher cable densities, the cable support points on the grid bars may mechanically damage the cables at the bottom of the tray. This risk can be significantly reduced if a metal plate, which prevents point pressure on the cables, is placed on the base of the mesh cable tray. The advantage of the “open” mesh cable tray lies in the ease with which cables can be routed out of it. An additional cover is unusual for mesh cable trays, but is available from certain manufacturers.<br />
Laying Patch Cords<br />
If appropriate solutions exist for laying installation cables, the question remains: where should patch cords be put, if not in the raised floor?<br />
This concept, and in turn the solution for the problem, is again made clear by comparing the distribution elements<br />
found in EN 50173-1 and EN 50173-3: In principle, the Area Distributor (HC: horizontal cross connect) assumes<br />
the function of the Floor Distributor, and the Device Connection (EO: Equipment Outlet, in the Distribution Switchboard)<br />
assumes the function of the telecommunication outlet (accordingly in one room). This telecommunication<br />
outlet is located in a cabinet that is comparable in principle to an office room. No one would think of supplying another room by means of a long patch cord from an outlet in an office. Similarly, in the data center only one element (e.g. a server) should be supplied from a device connection in the same cabinet. If this principle is followed, there will be no, or at least significantly fewer, patch cords routed from one cabinet to another, sparing the raised floor. Of course, patching from one cabinet to another cannot be excluded completely. Nevertheless, this should be implemented outside of the raised floor. Three technical variants for this are worth mentioning:<br />
• Implementing trays or mesh cable trays above cabinets. The advantages and disadvantages of both<br />
systems were already described above.<br />
• The clearly more elegant alternative to metal trays or mesh cable trays are special cable routing tray systems, such as the Raceway System from R&M. This system, with its distinctive yellow color, is likewise mounted on top of the cabinet. It provides a wide variety of shaped pieces (e.g. T pieces, crosses, exit/entry pieces, etc.) which fit together in such a way as to ensure that cable bending radii are maintained, an important factor for fiber optic cables.<br />
• If no option is available for installing additional cable routing<br />
systems above the row of cabinets, or more space in the cabinet<br />
is needed for active components, the “underground” variant is<br />
one possible solution. In this case patch panels are mounted in<br />
the raised floor in a box, e.g. the Raised Floor Solution from R&M,<br />
where up to 288 connections can be realized. A floor box is<br />
placed in front of each cabinet and the patch cords are then<br />
routed directly from it into the appropriate cabinet to the active<br />
components.<br />
The same savings in rack space can be achieved with R&M’s Robust Cabinet (CMS), where patch panels are installed vertically and at the sides. Here as well, patch cords remain within one rack.<br />
These solutions were already shown in a cabling architecture context, in section 3.4.5 “Other Variants”.<br />
Clear cable patching makes IMAC (Installation, Moves, Adds, Changes) work in the data center significantly<br />
easier.<br />
3.5.5. Basic Protection and Security<br />
Ensuring data security – e.g. with backups and high-availability solutions – is not the only challenge that faces<br />
data center administrators. They also must make sure that the data center itself and the devices in it are protected<br />
from environmental influences, sabotage, theft, accidents, and similar incidents.<br />
Many of the following items were already mentioned in detail under the topic of “Security Aspects” in section 1.12.<br />
An overview of essential data center dangers and corresponding countermeasures is again provided here.<br />
In general, it is true that security in a data center is always relative. It depends, for one, on the potential for danger,<br />
and also on any security measures that were already taken. A high level of security can be achieved absolutely by<br />
means of extremely high safeguards, even in an extremely dangerous environment. However, protecting a data<br />
center that is located in an earthquake area, a threatened military IT installation, or a similarly exposed environment does have its price.<br />
It therefore makes sense for those responsible for IT in the data center to assess the potential of danger before<br />
beginning their security projects, and to relate this to the desired level of security. They can then work out the<br />
measures that are required for realizing this level of security.<br />
Threat Scenarios<br />
The first step in safeguarding a data center against dangers always consists of correctly assessing actual threats.<br />
These are different for every installation. Installations in a military zone have requirements that are different than<br />
those of charitable organizations, and data centers in especially hot regions must place a higher emphasis on air<br />
conditioning their server rooms than IT companies in Scandinavia. The IT departments concerned should therefore<br />
always ask themselves the following questions:<br />
• How often does the danger in question occur?<br />
• What consequences will it have?<br />
• Are these consequences justifiable economically?<br />
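These three questions are essentially the inputs of a classic annualized loss expectancy (ALE) calculation, which relates incident frequency and damage per incident to the cost of countermeasures. A minimal sketch with hypothetical figures (the flood frequency and loss amounts are invented for illustration):<br />

```python
def annualized_loss_expectancy(occurrences_per_year: float,
                               loss_per_incident: float) -> float:
    """ALE = annual rate of occurrence x single loss expectancy."""
    return occurrences_per_year * loss_per_incident

def countermeasure_justified(ale_before: float, ale_after: float,
                             annual_cost: float) -> bool:
    """A measure is economically justifiable if the annual loss it
    avoids exceeds its annual cost."""
    return (ale_before - ale_after) > annual_cost

# Hypothetical: a flood once every 20 years causing 500,000 in damage
ale = annualized_loss_expectancy(1 / 20, 500_000)
print(ale)                                            # 25000.0
print(countermeasure_justified(ale, 2_500, 10_000))   # True
```

The comparison in the last line is exactly the third question above, made quantitative.<br />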
But what elements make up the threats that typically endanger IT infrastructures and provide the basis for the questions listed above? Possible catastrophic losses are examined below in this connection.<br />
Floods and hurricanes have been occurring with disproportionate frequency in central Europe in recent years, and if an IT department assumes that this trend will continue, it absolutely must make sure its data center is well secured against storms, lightning and high water.<br />
Losses from vandalism, theft and sabotage<br />
have achieved a similar level of significance.<br />
High-performance building security systems<br />
are used so that IT infrastructures can be protected<br />
as well as possible. Finally, the area of<br />
technical disturbances also plays an important<br />
role in security.<br />
Picture supplied by Lucerne town hall<br />
Catastrophic Losses<br />
We define the term “catastrophe” very liberally, assigning to it everything related to damage caused by fire, lightning and water, as well as other damage caused by storms.<br />
The quality of the data center building plays an especially big part in resisting storms. The roof should be robust<br />
enough so tiles do not fly off during the first hurricane and rain water is not able to penetrate unhindered; the cellar<br />
must be sealed and dry, and doors and windows should have the ability to withstand the storm, even at high wind<br />
forces.<br />
At first glance, these points appear to be obvious, and also apply to all serious residential buildings – in fact,<br />
practically all buildings in central Europe fulfill these basic requirements. Nevertheless, it often makes perfect<br />
sense, for temporary offsite installations and when erecting IT facilities abroad, to take these basic requirements<br />
into consideration.<br />
Fire and Lightning<br />
Quite a few security measures that are not at all obvious can be taken<br />
when it comes to fire protection. An obvious first step that makes sense is<br />
to implement the required precautions in the rooms or buildings to be<br />
protected. This includes smoke and heat detectors, flame sensors,<br />
emergency buttons as well as surveillance cameras that sound alarms<br />
when fire erupts, or ensure that security officers become aware of this<br />
situation faster.<br />
In addition, one must ensure that fire department reaction time is as short<br />
as possible. Companies with a fire department on site have a clear<br />
advantage here – but it can also be of advantage to carry out regular fire<br />
drills that include the local fire department.<br />
As lightning is one of the most common causes of fires, a lightning rod is mandatory in every data center building.<br />
In addition, responsible employees should provide all IT components with surge protection, to avoid the damage<br />
that can occur from distant lightning.<br />
Structural measures can also be a help: When constructing a data center, some architects go so far as to use<br />
concrete that contains metal, so a Faraday cage is produced. This ensures that the IT infrastructure is well<br />
protected against lightning damage.<br />
Other Causes of Fires<br />
Lightning is not the only cause of fires. Other causes are also associated with electricity; furthermore, overheating<br />
and explosions play an important role in connection with fire. Among other causes, fire that is started by electricity<br />
can erupt when the current strength or voltage of a power connection is too high for the device that is running on it.<br />
Defective cable insulation also represents a danger: it can produce sparks that accidentally ignite flammable materials located nearby. The parties responsible should therefore make sure not only that the cable insulation is in order, but also that as few flammable materials as possible are present in the data center. This tip<br />
may sound excessive, but in practice it is often true that piles of paper and packaging that are lying around,<br />
overflowing wastepaper baskets and dust bunnies and similar garbage make fires possible. Regular data center<br />
cleaning therefore fulfills not only a health need, but also a safety need. In connection with this, IT employees<br />
should also make sure that smoking is prohibited in data center rooms, and that a sufficient number of fire<br />
extinguishers are available.<br />
Other prevention methods can also be implemented. These include fire doors, and cabling that is especially<br />
secure. With fire doors, it is often enough to implement measures that stop the generation of smoke, since smoke<br />
itself can have a devastating effect on IT environments. However, in environments that are at especially high risk of fire, installing “proper” fire doors that are able to contain a fire for a certain amount of time will definitely<br />
provide additional benefits.<br />
With regard to cabling, flame-retardant cables can be laid, and so-called fire barriers can be installed that secure<br />
the cable environment from fires. These ensure that cable fires do not get oxygen and also prevent the fire from<br />
quickly spreading to other areas.<br />
Fire Fighting<br />
Once a fire occurs, the data center infrastructure should already have the means to actively fight it. Sprinkler<br />
systems are especially important in this area. These are relatively inexpensive and are easily maintained, but are<br />
a problem in that the water can cause tremendous damage to IT components. Apart from that, sprinkler systems<br />
provide only a few advantages when it comes to fighting concealed fires, like those in server cabinets or cable<br />
shafts.<br />
CO2 fire extinguishing systems represent a sensible alternative to sprinkler systems in many environments. These<br />
operate on the principle of smothering burning fires, and therefore do not require water. However, they also bring<br />
along disadvantages: For one, they represent a great danger to any individuals who are still in the room, and for<br />
another, they have no cooling effect, so they are unable to contain damage caused by the generation of heat.<br />
A further fire-fighting method is the use of inert (noble) gases. Fire extinguishing systems based on inert gases (like argon) also reduce the oxygen content in the air and thus suffocate flames. However, they are less dangerous to humans than CO2, so individuals who are in the burning room are not in mortal danger. In addition, argon does not cause damage to IT products. Drawback: these systems are rather expensive.<br />
Water Damage<br />
Among other causes, water damage in the data center results not only from the extinguishing water used in<br />
fighting fires, but also from pipe breaks, floods and building damage. A certain amount of water damage can<br />
therefore be prevented by avoiding the use of a sprinkler system.<br />
Nevertheless, the parties responsible for IT in the data center should also consider devices that protect against<br />
water damage. First on the list are humidity sensors that monitor a room and sound an alarm when humidity<br />
appears – on the floor, for example. Surveillance cameras can also be used for this same purpose; they keep<br />
security officers informed at all times of the state of individual rooms, and thus shorten reaction time in case of<br />
damage.<br />
In many cases, another sensible option is to position server cabinets and similarly delicate components in rooms in such a way that they do not suffer damage when small amounts of water enter. Ideally, those on watch will have turned off the main tap of the defective pipe before larger volumes of water arrive.<br />
Technical Malfunctions<br />
Technical malfunctions have a great effect on the availability of the applications that are operated in the data<br />
center. Power failures rank among the most common technical malfunctions, and can be prevented or have their<br />
effects reduced through the use of UPSs, emergency generators and redundant power supply systems.<br />
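The benefit of redundant power paths can be quantified with a simple parallel-availability model. A sketch under the usual assumption of independent failures (the availability figure for a single power path is illustrative):<br />

```python
def parallel_availability(unit_availability: float, n_units: int) -> float:
    """Availability of n redundant units where any single unit suffices.

    A_total = 1 - (1 - A)^n, assuming independent failures.
    """
    return 1 - (1 - unit_availability) ** n_units

def downtime_hours_per_year(availability: float) -> float:
    """Expected downtime per year (8760 h) for a given availability."""
    return (1 - availability) * 8760

single = 0.999  # illustrative availability of one power path
dual = parallel_availability(single, 2)
print(f"{dual:.6f}")                                # 0.999999
print(f"{downtime_hours_per_year(single):.2f} h")   # 8.76 h per year
print(f"{downtime_hours_per_year(dual):.4f} h")
```

The jump from hours to well under a minute of expected downtime per year is why redundant UPS and supply paths dominate the design of highly available data centers.<br />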
In many cases, electromagnetic fields can affect the data transfer. This problem primarily occurs when power and<br />
data cables are laid together in one cable tray. However, other factors like mechanical stresses and high<br />
temperatures also have a great effect on the functioning of cable connections.<br />
Most difficulties that arise in this area can therefore be easily avoided: It is enough to house power and data<br />
cables in physically separate locations, to use shielded data cables, prevent direct pressure on the cable and<br />
provide for sufficient cooling. The parties responsible for this latter measure must make sure that data center air<br />
conditioning systems do not just cool the devices that generate heat in server rooms, but also cool cable paths<br />
exposed to heat – for example, cables under the roof.<br />
Surveillance Systems<br />
Most of the points covered in the section on catastrophes, such as threats resulting<br />
from fire and water, only cause significant damage when they have the opportunity to<br />
affect IT infrastructures over a certain period of time.<br />
It is therefore especially important to constantly monitor server cabinets, computer<br />
rooms and entire buildings, so that reaction times are extremely short in the event of<br />
damage.<br />
Automatic surveillance systems provide great benefits in this area; they keep an eye on data center parameters<br />
like temperature, humidity, power supply, dew point and smoke development, and at the same time can automatically<br />
initiate countermeasures (like increasing fan speed when a temperature limit value is exceeded) or send<br />
warning messages. Ideally, systems like these do not operate from just one central point. Instead, their sensors<br />
should be distributed over the entire infrastructure and their results should be reported via IP to a console that is<br />
accessible to all IT employees. They therefore use the same cabling as data center IT components to transfer<br />
their data, which makes it unnecessary to lay new cables. In addition, they should be freely scalable, so they can grow along with the requirements of the data center infrastructure.<br />
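The threshold logic such a surveillance system applies before initiating a countermeasure or sending a warning can be illustrated with a small sketch (the sensor names and limit values are hypothetical, not taken from any specific product):<br />

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    sensor_id: str
    metric: str      # e.g. "temperature", "humidity"
    value: float

# Illustrative limit values; real thresholds come from the DC design specs
LIMITS = {"temperature": 27.0, "humidity": 60.0}

def evaluate(reading: SensorReading) -> Optional[str]:
    """Return a warning message if the reading exceeds its limit, else None."""
    limit = LIMITS.get(reading.metric)
    if limit is not None and reading.value > limit:
        return (f"ALARM {reading.sensor_id}: {reading.metric} "
                f"{reading.value} exceeds limit {limit}")
    return None

print(evaluate(SensorReading("rack-07-top", "temperature", 31.5)))
print(evaluate(SensorReading("rack-07-top", "humidity", 45.0)))   # None
```

In a distributed system, each sensor would report its readings over IP and a central console would run this kind of check for every incoming value.<br />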
Some high-performance products have the ability to integrate IP-based microphones and video cameras so they<br />
can transmit a real-time image and audio recording of the environment to be protected. Many of these products<br />
even have motion detectors, door sensors and card readers, as well as the ability to control door locks remotely.<br />
If a company’s budget for a data center surveillance solution is not sufficient, the company can still consider<br />
purchasing and installing individual sensors and cameras. However, proceeding in this fashion only makes sense<br />
in smaller environments, since many sensors – for example, most fire sensors – cannot be managed centrally,<br />
and must therefore be installed at a location where an employee is able to hear at all times whether an alarm has gone off.<br />
With cameras, especially in consideration of current price trends, it always makes sense to use IP-based solutions<br />
to avoid having to use dedicated video cables.<br />
Vandalism, Theft, Burglary and Sabotage<br />
Security products which control access to the data center building<br />
play a primary role in preventing burglaries, sabotage and theft.<br />
These products help to ensure that “villains” do not have any<br />
access whatsoever to the data center infrastructure. Modern<br />
access control systems manage individual employee authorizations,<br />
usually through a central server, and make it possible for<br />
employees to identify themselves by means of card readers,<br />
keypads, iris scanners, fingerprints and similar means.<br />
If data center management implements an ingenious system that<br />
only opens up the routes that employees need to perform their<br />
daily work, they automatically minimize the risk of misuse.<br />
However, this does not rule out professional evildoers from the<br />
outside attempting to enter the data center complex by force.<br />
Access control systems should therefore, if possible, be combined<br />
with camera surveillance systems, security guards and alarm<br />
systems, since these contribute greatly to unauthorized individuals<br />
not entering the building at all.<br />
As we saw in the preceding section, data center surveillance systems with door lock controls and door sensors<br />
that trigger alarms automatically (for example, between 8:00 in the evening and 6:00 in the morning) when room<br />
or cabinet doors are opened supplement the products mentioned above, increasing the overall level of security.<br />
“Vandalism sensors” measure vibrations and also help to record concrete data that document senseless<br />
destruction. The audio recordings already mentioned can also be of use in this area.<br />
Conclusion<br />
The possibilities for influencing the security of a data center are by no means limited to the planning phase. Many improvements can still be implemented at a later point in time.<br />
The implementation of inert gas fire extinguishing systems and high-performance systems for data center surveillance and access control were just two examples given in this regard. It is therefore absolutely essential that<br />
those responsible for the data center continue, at regular intervals, to devote time to this topic and analyze the<br />
changes that have come to light in their environment, so they can then update their security concept to the<br />
modified requirements.<br />
3.6. Active Components / Network Hardware<br />
The purpose of this section is to give an overview of the active components in an ICT environment, as well as their<br />
functions. These include terminal devices like servers, PCs and printers, as well as network devices like switches, routers and firewalls; each of these components also contains software. One can<br />
also differentiate between passive hardware such as cabling, and active hardware such as the devices listed<br />
above.<br />
Active network components are electronic systems<br />
that are required to amplify, convert, identify, control<br />
and forward streams of data. Network devices are a<br />
prerequisite for computer workstations being able to<br />
communicate with one another, so they can send,<br />
receive and understand data.<br />
The network here consists of independent computers<br />
that are linked, but separate from one another, and<br />
which access common resources. In general, a<br />
distinction is made in the area of networks between<br />
LAN (local area network / private networks) and<br />
WAN (wide area network / public networks), MAN<br />
(metropolitan area network / city-wide networks) and<br />
SAN (storage area network / storage networks).<br />
3.6.1. Introduction to Active Components<br />
Small teams that want to exchange files with one another through their workstations have needs that are different<br />
than those of large organizations, in which thousands of users must access specific databases. It is for that<br />
reason that a distinction is made between so-called client/server networks, in which central host computers<br />
(servers) are made available, and peer-to-peer networks, in which individual computers (clients) have equal rights<br />
to release and use resources.<br />
• In general, a Client/Server network differentiates between two types of computers that make up the<br />
network: The server, as host, is a computer which provides resources and functions to individual users<br />
from a central location; the client, as customer, takes advantage of these services. The services provided<br />
by servers are quite diverse: They range from simple file servers which distribute files over the network or<br />
release disk space for other uses, to printer servers, mail servers and other communication servers, up to<br />
specialized services like database or application servers.<br />
• As its name implies, a Peer-to-Peer Network consists principally of workstations with equal rights. Every<br />
user is able to release the resources from his/her own computer to other users in the network. This<br />
means that all computers in the network, to a certain extent, provide server services. In turn, this also<br />
means that data cannot be stored in one central location.<br />
In practice, however, hybrids of these network types are encountered more frequently than pure client/server or pure peer-to-peer networks. For example, one could imagine the following situation in a company:<br />
Tasks like direct communication (e-mail), Internet access (through a proxy server or a router) or solutions for data<br />
backup are made available by means of central servers; and by contrast, access to files within departments or to<br />
a colleague’s printer within an office is controlled through peer-to-peer processes that bypass servers.<br />
In a stricter sense, it is actually special software components, rather than specific computers, that are designated<br />
as “clients” or “servers”.<br />
This also results in differences in active network components for small and large organizations. Functions that are<br />
provided must satisfy different requirements, whether these concern security (downtime, access, monitoring),<br />
performance (modular expandability), connection technology (supported protocols and media) or other specific<br />
functions that are desired or required.<br />
3.6.2. IT Infrastructure Basics (Server and Storage Systems)<br />
Active components, if they are network-compatible, have LAN interfaces of various types. These are specified by<br />
the technology used for the transmission (transmission protocol, e.g. Ethernet), speed (100 Mbit/s, 1/10 Gbit/s),<br />
media support (MM/SM, copper/fiber) and interface for the physical connection (RJ45, LC, etc.).<br />
There are two types of latencies (delay) in the transmission of data for network-compatible devices:<br />
• Transfer delay (in µs/ns, based upon medium and distance)<br />
• Switching delay (in ms/µs, based upon device function, switching/routing)<br />
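As an illustration of the two contributions, the following sketch estimates end-to-end latency from propagation delay and per-device switching delay. The figures (signal speed, switch delay, distance) are assumed values for illustration, not taken from the handbook:

```python
# Rough end-to-end latency estimate for a switched link (illustrative figures).

C_MEDIUM = 2e8  # assumed signal speed in fiber/copper, m/s (~2/3 of light speed)

def transfer_delay_us(distance_m: float) -> float:
    """Propagation delay over the medium, in microseconds."""
    return distance_m / C_MEDIUM * 1e6

def total_latency_us(distance_m: float, hops: int, switch_delay_us: float) -> float:
    """Propagation delay plus per-device switching delay along a path."""
    return transfer_delay_us(distance_m) + hops * switch_delay_us

# 300 m of cabling across 2 switches, assuming ~4 us switching delay each:
print(total_latency_us(300, 2, 4.0), "us")  # propagation alone is only 1.5 us
```

The example shows why the handbook stresses device latency: at data-center distances, switching delay dominates propagation delay by a wide margin.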
One must therefore pay special attention to the latency of active components to ensure that the delay in the data<br />
communication is as low as possible. This does not imply, however, that cabling quality does not have an effect on<br />
data transmission.<br />
On the contrary: Other factors like crosstalk, attenuation, RL, etc. play an important role in a clean signal<br />
transmission. In addition, distances between signals are becoming smaller as a result of continually growing<br />
transmission rates and other factors, while signal levels are becoming smaller as a result of enhanced modulation<br />
types. Therefore, transmissions are becoming more susceptible to interference, i.e. transmissions are becoming<br />
more sensitive to outside influences. In addition, performance reserves which can stand up to future data rates are<br />
in demand more and more for cabling components.<br />
In the final analysis, the typical investment period for cabling extends to over 10 years, as compared to 2 to 3<br />
years for active components. Wanting to save costs in data center cabling would therefore certainly be the wrong<br />
investment approach, especially since the percentage of this investment to total investments for a data center is<br />
very small (5 % to 7 %). In addition, having to replace cabling is tied to enormous expenditure and, as with<br />
switches for example, leads to an interruption in the data center if redundancy does not exist.<br />
Decision makers must be aware here that the cabling structure in the data center, along with all of its components,<br />
represents the basis of all forms of communication (e-mail, VOIP, etc.) – for a long period of time!<br />
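A back-of-the-envelope calculation makes the cost argument concrete. All figures below except the 5–7 % share and the lifetimes mentioned above are illustrative assumptions:

```python
# Annualized cost of cabling vs. its investment period (illustrative figures).
dc_budget = 1_000_000          # hypothetical total data center investment
cabling_share = 0.06           # 5-7 % of total per the handbook; 6 % assumed here
cabling_lifetime_years = 10    # typical cabling investment period (handbook)
active_lifetime_years = 3      # typical refresh cycle for active components

cabling_per_year = dc_budget * cabling_share / cabling_lifetime_years
print(cabling_per_year)  # a small annual cost for the bedrock of all communication
```

Spread over its long lifetime, high-quality cabling is a minor annual expense compared to active equipment that is refreshed every two to three years.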
Server (Single Server)<br />
This term denotes either a software component (program), in the context of the client/server model, or a hardware<br />
component (computer) on which this software (program) runs, in the context of this concept. The term “host” is<br />
also used in technical terminology for the server hardware component. So whether a server is meant to denote a<br />
host or a software component can only be known from context, or from background information.<br />
Server as software:<br />
A server is a program which provides a service. In the context of the client/server model, another program, the<br />
client, can use this service. Clients and servers can run as programs on different computers or on the same<br />
computer.<br />
In general, the concept can be extended to mean a group of servers which provide a group of services. Examples<br />
are mail servers, (extended) web servers, application servers and database servers. (also see section 1.2)<br />
Server as hardware:<br />
The term server for hardware is used…<br />
• …as a term for a computer on which a server program or a group of server programs run and provide<br />
basic services, as already described above.<br />
• …as a term for a computer whose hardware is adapted specifically to server applications, partly through<br />
specific performance features (e.g. higher I/O throughput, higher RAM, numerous CPUs, high reliability,<br />
but minimal graphics performance).<br />
Server Farm<br />
A server farm is a group of networked server hosts that are all of the same kind and combined into one logical<br />
system. It optimizes internal processes by distributing the load over the individual servers, and speeds up<br />
computing processes by taking advantage of the computing power of multiple servers. A server farm uses<br />
appropriate software for the load distribution.<br />
Virtual Server<br />
If the performance provided by a single host is not enough to manage the tasks of a server, several hosts can be<br />
interconnected into one group, also called a computer cluster. This is done by installing a software component on<br />
all hosts, which causes this cluster to appear as a single server to its clients. Which host is actually executing<br />
which part of a user’s task remains hidden to that user, who is connected to the server through his/her client. This<br />
server is thus a distributed system.<br />
The reverse situation also exists, in which multiple software servers are installed onto one high-performance host.<br />
In this case, it remains hidden to users that the different services are in reality being handled by only a single host.<br />
Examples of well-known providers of virtualization solutions include VMware, Citrix and Microsoft.<br />
Rack Server<br />
Rack servers combine high performance in a small amount of<br />
space. They can be employed in a very flexible manner, and<br />
are therefore the first choice for constructing IT infrastructures<br />
that are scalable and universal. In June of 2010, TecChannel<br />
created a list of the most popular rack servers from its<br />
extensive, detailed database of products:<br />
• HP ProLiant DL380<br />
• IBM System x3950<br />
• Dell PowerEdge R710<br />
• HP ProLiant DL370 G6<br />
• HP ProLiant DL785<br />
• Dell PowerEdge 2970<br />
• Dell PowerEdge R905<br />
• Dell PowerEdge R805<br />
• Dell PowerEdge R610<br />
• HP ProLiant DL580 Insides of a HP ProLiant DL380<br />
As an illustration, the key performance features of the first three models are listed below:<br />
                      HP ProLiant DL380           IBM System x3950      Dell PowerEdge R710<br />
Construction          2U rack                     4U rack               2U rack<br />
Main memory (max.)    144 GByte                   64 GByte              144 GByte<br />
Processor             Intel Xeon 5500 series;     Intel Xeon 7400       Intel Xeon 5600 series;<br />
                      Intel Xeon 5600 series                            Intel Xeon 5500 series<br />
Storage (max.)        16 TByte                    2.35 TByte            6 TByte<br />
Network ports         4 x Gbit Ethernet           2 x Gbit Ethernet     2 x 10 Gbit Ethernet<br />
Power supply          2 x 750 watt                2 x 1,440 watt        2 x 570 watt<br />
                      power supplies              power supplies        power supplies<br />
Pizza Box<br />
The term pizza box, when used in a server context, is generally a slang term for server housing types with 19-inch<br />
technology and a single height unit.<br />
Blade Server<br />
Blade systems (blade, blade server, blade center) are one of the most modern server designs and are the fastest<br />
growing segment of the server market.<br />
However, they do not differ from traditional rack servers in terms of operation and the execution of applications.<br />
This makes it relatively easy to use blades for software systems that already exist. The most important selection<br />
criteria for blade servers are the type of application expected to run on the server and the expected workload. In<br />
consideration of their maintainability, provisioning and monitoring, blades on the whole deliver more today than<br />
their 19-inch predecessors, yet are economical when it comes to energy and cooling. For example, a BladeCenter<br />
(an IBM term) provides the infrastructure required by the blades connected inside it. In addition to the<br />
power supply, this includes optical drives, network switches, Fibre Channel switches (for the storage connection)<br />
as well as other components.<br />
The advantage of blades lies in their compact design, high power density, scalability and flexibility, a cabling<br />
system that is more straightforward with significantly lower cable expenditure, and quick and easy maintenance. In<br />
addition, only a single keyboard-video-mouse controller (KVM) is required for the rack system.<br />
A flexible system management solution always pays off, especially in the area of server virtualization. Since in this<br />
situation multiple virtual servers are usually being executed on one computer, a server also requires multiple<br />
connections to the network. Otherwise, one must resort to costly processes for address conversion, similar to<br />
what NAT (Network Address Translation) does. In addition, the level of security is increased by separating<br />
networks. Many manufacturers allow up to 24 network connections to be provided for one physical server for this<br />
purpose, without administrators having to change the existing network infrastructure. This simplifies integration of<br />
the blade system into the existing infrastructure.<br />
TecChannel created the following list of the most popular<br />
blade servers from its extensive, detailed product database:<br />
• 1st place: IBM BladeCenter S<br />
• 2nd place: Dell PowerEdge M610<br />
• 3rd place: Fujitsu Primergy BX900<br />
• 4th place: Fujitsu Primergy BX600 S3<br />
• 5th place: Fujitsu Primergy BX400<br />
• 6th place: IBM BladeCenter H<br />
• 7th place: Dell PowerEdge M910<br />
• 8th place: Dell PowerEdge M710<br />
• 9th place: HP ProLiant BL460c<br />
• 10th place: Dell PowerEdge M710HD<br />
IBM BladeCenter S<br />
This list of manufacturers must also be extended by Oracle (Sun Microsystems), Transtec, Cisco with its UCS<br />
systems (Unified Computing System) and other popular manufacturers.<br />
The key performance features of the first three models are listed below:<br />
                      IBM BladeCenter S             Dell PowerEdge M610           Fujitsu Primergy BX900<br />
Construction          7U rack                       10U rack                      10U rack<br />
Server blade module   BladeCenter HS22              PowerEdge M610                Primergy BX920 S1<br />
Front mounting slots  6 x 2-CPU plug-in units       16 x half height or           18 x half height or<br />
                      and others                    8 x full height               9 x full height<br />
Processor             Intel Xeon 5500               Intel Xeon 5500 and           Intel Xeon E5500 series<br />
                                                    5600 series<br />
Network ports         2 x Gbit Ethernet and         2 x Gbit Ethernet (CMC);      4 x Gbit Ethernet<br />
                      TCP/IP Offload Engine (TOE)   1 x ARI (iKVM)<br />
Power supply          4 x 950 / 1,450 watts         3 x non-redundant or          6 x 1,165 watts<br />
                                                    6 x redundant, 2,360 watts<br />
Mainframes<br />
The mainframe computer platform (also known as large computer or<br />
host) that was once given up for dead has been experiencing a second<br />
life. At this point, IBM, with its System-z machines (see image to left), is<br />
practically the only supplier of these monoliths.<br />
The mainframe falls under the category of servers as well.<br />
System i (formerly called AS/400 or eServer, iSeries or System i5) is a<br />
computer series from IBM.<br />
IBM’s System i has a proprietary operating system called i5/OS and its<br />
own database called DB2, upon which a vast number of installations run<br />
commercial applications for managing typical company business<br />
processes, as a server or client/server application. These can typically be<br />
found in medium-sized companies.<br />
The System i also falls under the server category. Some of these systems<br />
can be installed in racks.<br />
Storage Systems<br />
Storage systems are systems for online data processing as well as for data storage, archiving and backup.<br />
Depending upon the requirements for their application and for access time, storage systems can either operate as<br />
primary components, such as mass storage systems in the form of hard disk storage or disk arrays, or as<br />
secondary storage systems such as jukeboxes and tape backup systems.<br />
There are various transmission technologies available for storage networks:<br />
• DAS – Direct Attached Storage<br />
• NAS – Network Attached Storage<br />
• SAN – Storage Area Networks<br />
• FC – Fibre Channel (details in section 3.8.3)<br />
• FCoE – Fibre Channel over Ethernet (details in section 3.8.4)<br />
Some of these storage architectures have already been mentioned in section 1.2 that describes the basic elements<br />
of a data center.<br />
Storage Networks<br />
NAS and SAN are the best-known approaches for storing data in company networks. New SAN technologies like<br />
Fibre Channel over Ethernet have been gaining in popularity, because with FCoE the I/O consolidation from<br />
merging with Ethernet LAN is attractive for infrastructure reasons (see section 3.7.2.). Nevertheless, all storage<br />
solutions have disadvantages as well as advantages, and these need to be considered in order to implement<br />
future-proof storage solutions.<br />
DAS – Direct Attached Storage<br />
Direct Attached Storage (DAS) or Server Attached Storage is a term for hard disks that sit in a separate housing<br />
and are connected to a single host.<br />
The usual interfaces for this implementation are SCSI (Small Computer System Interface) and SAS (Serial<br />
Attached SCSI). The parallel SCSI interface with its final standard Ultra-320 SCSI was the forerunner to SAS.<br />
However, it had reached the physical limits of SCSI: the signal propagation delays of the individual bits on<br />
the parallel bus differed too greatly, so the clock rate on the bus had to be limited so that the slowest and fastest bits<br />
could still be evaluated at the bit sample time. This limitation conflicted with the goal of continuously<br />
increasing bus performance. As a result, a new interface was designed, one that was serial and therefore offered<br />
greater performance reserves. Since the Serial ATA (S-ATA) serial interface had already been introduced into<br />
desktop PCs a few years earlier, it also made sense to keep SCSI’s successor compatible with S-ATA to a great<br />
extent, in order to reduce development and manufacturing costs through its reusability.<br />
Nevertheless, all block-oriented transmission protocols can be used for direct (point to point) connections.<br />
NAS – Network Attached Storage<br />
Network Attached Storage (NAS) describes an easy to manage file server. NAS is generally used to provide<br />
independent storage capacity in a computer network without great effort. NAS uses the existing Ethernet network<br />
with a TCP/IP protocol like NFS (Network File System) or CIFS (Common Internet File System), so computers that<br />
are connected to the network can access the data media. They often operate purely as file servers.<br />
An NAS generally provides many more functions than just assigning computer storage over the network. In<br />
contrast to Direct Attached Storage, an NAS is therefore always either an independent computer (host) or a virtual<br />
computer (Virtual Storage Appliance, VSA for short) with its own operating system, and is integrated into the<br />
network as such. Many systems therefore also possess RAID functions to prevent data loss arising from defects.<br />
NAS systems can manage large volumes of data for company uses. Extensive volumes of data are also made<br />
quickly accessible to users through the use of high-performance hard disks and caches. Professional NAS<br />
solutions are well-suited for consolidating file services in companies. NAS solutions are high-performance,<br />
redundant and therefore fail-safe, and represent an alternative to traditional Windows/Linux/Unix file servers.<br />
Due to the hardware they use and their ease of administration, NAS solutions are significantly cheaper to implement<br />
than comparable SAN solutions. However, this is at the expense of performance.<br />
SAN – Storage Area Networks<br />
A Storage Area Network (SAN) in a data processing context denotes a network used to connect hard disk<br />
subsystems and tape libraries to server systems.<br />
SANs are designed for serial, continuous, high-speed transfers of large volumes of data (up to 16 Gbit/s). They<br />
are currently based on the implementation of Fibre Channel standards for high-availability, high-performance<br />
installations. Servers are connected into the FC-SAN using Host Bus Adapters (HBA).<br />
The adjacent graphic shows a typical network configuration in which the SAN is run as a separate network by<br />
means of Fibre Channel: servers connect over copper Ethernet to the LAN access, aggregation and core<br />
switches, and over fiber optic FC links to the FC SAN switch and the FC storage. With the continued<br />
development of FCoE (Fibre Channel over Ethernet), this network will merge with the Ethernet-based LAN in the<br />
future.<br />
FC and the migration to FCoE are described in sections 3.7.2, 3.8.3 and 3.8.4.<br />
Storage Solutions<br />
The demand for storage capacities has been steadily growing as<br />
a result of increasing volumes of data as well as requirements<br />
for data backup and archiving. Cost efficiency in the area of<br />
company storage is therefore still a central theme for IT<br />
managers. Their goal is to achieve high storage efficiency by<br />
using their existing means and hardware in combination with<br />
storage virtualization, Cloud Storage (Storage as a Service) and<br />
consolidation.<br />
The Unified Storage approach – a system that<br />
supports all storage protocols – has been gaining<br />
ground in new storage acquisitions.<br />
Racks that contain only hard disks are finding their way into more and more data centers and are being supplied<br />
preassembled by solution providers such as:<br />
• NetApp • EMC • HDS Hitachi <strong>Data</strong> Systems<br />
• HP • IBM • Oracle (Sun Microsystems)<br />
Secondary storage solutions like jukeboxes and tape backup systems hardly play a part in current storage<br />
solutions any longer. However, IT managers must not ignore conventional backups or the data retention that is<br />
legally required – key word: compliance – for all the storage technologies they use.<br />
3.6.3. Network Infrastructure Basics (NICs, Switches, Routers & Firewalls)<br />
Networks can be classified in different ways and through the use of different criteria. A basic distinction is made<br />
between private (LAN) and public (WAN) networks, which also use different transmission technologies. One<br />
typically comes across base-band networks in LANs (Ethernet, and also Token Ring and FDDI in earlier times)<br />
and broadband networks in WANs (ATM, xDSL, etc.). Routers which possess a LAN interface as well as a WAN<br />
port assume the task of the LAN/WAN transition. Thanks to its continuing development and easy<br />
implementation, Ethernet is also pushing its way more and more into public network technologies,<br />
particularly for providers who operate city-wide networks (MANs).<br />
The components in a network infrastructure can basically be divided up into two groups:<br />
• Passive network components which provide the physical network structure, such as cables (copper and<br />
glass fiber), outlets, distribution panels, cabinets, etc.<br />
• Active network components, which process and forward (amplify, convert, distribute, filter, etc.) data.<br />
The following sections cover active network components in detail, to give an understanding of their function and<br />
area of application.<br />
NIC – Network Interface Card<br />
A network interface card (NIC) usually exists “onboard” a network-compatible device and creates the physical<br />
connection to the network, by means of an appropriate access method. Ethernet in all its forms (different media<br />
and speeds) is currently being used almost exclusively as an access method, also known as a MAC protocol<br />
(Media Access Control).<br />
Each NIC has a unique hardware address, also called a MAC address, that is required by switches in order to<br />
forward Ethernet frames. The first 3 bytes of this 6-byte MAC address are assigned by the IEEE to the<br />
manufacturer, who uses the remaining 3 bytes as a serial number.<br />
Therefore, in addition to its configured IP address which is required for routing, every device integrated in an<br />
Ethernet LAN also possesses a MAC address, which is used by switches for forwarding.<br />
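The OUI/serial split can be illustrated by decomposing a MAC address. The address used below is an arbitrary example, not a real assignment:

```python
def split_mac(mac: str) -> tuple[str, str]:
    """Split a MAC address into the IEEE-assigned OUI (first 3 bytes)
    and the manufacturer-assigned serial number (last 3 bytes)."""
    octets = mac.lower().replace("-", ":").split(":")
    assert len(octets) == 6, "a MAC address has 6 bytes"
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, serial = split_mac("00-1A-2B-3C-4D-5E")  # arbitrary example address
print(oui)     # identifies the manufacturer (OUI)
print(serial)  # manufacturer's serial number
```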
Repeaters and Hubs<br />
These network components are not used much today, since, for reasons of performance, current LAN networks<br />
are constructed only to be switched. They were used as signal amplifiers (repeater) and distributors (hub) in<br />
shared Ethernets in which network subscribers still had to share overall bandwidth. Since these devices are rarely<br />
used today, we will omit a detailed description of them.<br />
Switches<br />
A “standard” layer 2 switch divides a network up into physical<br />
subnets and thus increases network bandwidth (from shared<br />
to switched). This means that each individual subscriber can<br />
now use the bandwidth commonly available to hubs, if the<br />
subscriber alone is connected to a switch port (segment).<br />
Segmenting is achieved by the switch remembering which destination MAC address can be reached at which port.<br />
To do this, the switch builds a “Source Address Table” (SAT), recording for each physical port (switch<br />
connection) the NIC addresses of the terminal devices whose frames it has received there.<br />
If a received destination address is not yet known, i.e. not yet present in the SAT, the switch forwards the frame<br />
to all ports, an operation known as flooding. When response frames come back from the recipients, the switch<br />
notes their MAC addresses and the associated ports (entries in the table) and from then on sends such data only<br />
there.<br />
Switches therefore learn the MAC addresses of connected devices automatically, which is why they do not have<br />
to be configured unless additional specific functions are required, of which there can be several.<br />
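The learning behavior described above can be sketched as a toy model; the port count and the MAC addresses are made up, and real switch firmware of course does far more (aging, VLANs, etc.):

```python
# Minimal sketch of transparent-bridge learning (the SAT described above).
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.sat: dict[str, int] = {}   # Source Address Table: MAC -> port
        self.ports = range(num_ports)

    def receive(self, in_port: int, src_mac: str, dst_mac: str) -> list[int]:
        """Learn the sender's port, then return the port(s) to forward to."""
        self.sat[src_mac] = in_port                       # learn / refresh entry
        if dst_mac in self.sat:                           # known destination:
            return [self.sat[dst_mac]]                    #   forward to one port
        return [p for p in self.ports if p != in_port]    # unknown: flood

sw = LearningSwitch(4)
print(sw.receive(0, "aa", "bb"))  # destination unknown -> flooded to [1, 2, 3]
print(sw.receive(1, "bb", "aa"))  # reply: "aa" was learned -> [0] only
```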
A switch operates on the Link Layer (layer 2, MAC layer) of the OSI model (see section 3.8.1) and works like a<br />
bridge. Therefore, manufacturers also use terms like bridging switch or switching bridge. Bridges were the actual<br />
forerunners to switches and generally have only two ports available for sharing one LAN. A switch in this context<br />
is a multi-port bridge, and because of the interconnection of the corresponding ports, this device could also be<br />
called a matrix switch.<br />
Different switches can be distinguished based on their performance ability and other factors, by using the following<br />
features:<br />
• Number of MAC addresses that can be stored (SAT table size)<br />
• Method by which a received data packet is forwarded (switching method)<br />
• Latency (delay) of the data packets that are forwarded<br />
• Cut-Through: The switch forwards the frame immediately after reading the destination address. Advantage: the<br />
latency (delay) between receiving and forwarding is extremely small. Disadvantage: defective data packets are<br />
not identified and are forwarded to the recipient anyway.<br />
• Store-and-Forward: The switch receives the entire frame and saves it in a buffer, where the packet is checked<br />
and processed using different filters; only after that is it forwarded to the destination port. Advantage: defective<br />
data packets can be sorted out beforehand. Disadvantage: storing and checking data packets causes a delay<br />
that depends on the size of the frame.<br />
• Combination of Cut-Through and Store-and-Forward: Many switches operate using both methods. Cut-Through<br />
is used as long as only a few defective frames come up; if faults become more frequent, the switch changes<br />
over to Store-and-Forward.<br />
• Fragment-Free: The switch receives the first 64 bytes of the Ethernet frame and forwards the data if this portion<br />
has no errors. The reasoning is that most errors and collisions occur within the first 64 bytes. In spite of its<br />
effectiveness, this method is seldom used.<br />
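The latency trade-off between the methods follows directly from serialization time: store-and-forward must buffer the whole frame before forwarding, while cut-through waits only for the destination address at the start of the frame. A small sketch, assuming a 1 Gbit/s line rate and typical Ethernet frame sizes:

```python
# Minimum added delay per method = time to receive the bytes it must wait for.

def serialization_us(frame_bytes: int, rate_bps: float) -> float:
    """Time to clock frame_bytes onto/off the wire, in microseconds."""
    return frame_bytes * 8 / rate_bps * 1e6

GIG = 1e9  # 1 Gbit/s line rate assumed

# Store-and-forward on a maximum-size 1518-byte frame:
print(serialization_us(1518, GIG), "us")
# Cut-through needs only preamble (8 B) + destination MAC (6 B) = 14 bytes:
print(serialization_us(14, GIG), "us")
```

The gap widens with frame size, which is why cut-through is attractive for latency-sensitive data center traffic.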
The expression layer 3 switch is somewhat misleading, since it describes a multi-functional device which is a<br />
combination of router and switch. Brouter was once also a term for this. In routing, the forwarding decision is<br />
made on the basis of OSI layer 3 information, i.e. an IP address. A layer 3 switch can therefore both assign<br />
different domains (IP subnets) to individual ports and also operate as a switch within these domains. However it<br />
also controls the routing between these domains.<br />
A wide variety of switch designs are available, from the smallest device with 5 ports up to a modular backbone<br />
switch which can provide hundreds of high-speed ports.<br />
There are also a multitude of extra functions available which could be listed here. The different switch types in the<br />
data center (Access, Aggregation, Core) and their primary functions were already described in section 3.3. The<br />
functions / protocols required for redundancy in the data center are listed in section 3.8.6.<br />
Router<br />
A router connects a number of network segments, which may operate with different transmission protocols<br />
(LAN/WAN). It operates on Layer 3, the Network Layer of the OSI reference model (see section 3.8.1).<br />
As network nodes between two or more different networks or<br />
subnets, a router has an interface as well as an IP address in<br />
each network. It can therefore communicate within each of<br />
these networks. When a data packet reaches the router, the<br />
router reads its destination (destination IP address) to determine<br />
the appropriate interface and shortest path over which<br />
the packet will reach the destination network.<br />
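The forwarding decision described above amounts to a longest-prefix match of the destination address against the routing table. The following sketch shows this with Python's standard ipaddress module; the routing entries and interface names are hypothetical:

```python
# Sketch of a router's forwarding decision: longest-prefix match.
import ipaddress

# Hypothetical routing table: destination network -> outgoing interface.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"):   "wan0",   # default route
}

def lookup(dst: str) -> str:
    """Return the interface for the most specific matching route."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if dst_ip in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(lookup("10.1.2.3"))   # the more specific /16 beats the /8
print(lookup("192.0.2.1"))  # only the default route matches
```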
In the private sector, one is acquainted with DSL routers or<br />
WLAN routers (wireless LAN). Many routers include an<br />
integrated firewall which protects the network from unauthorized<br />
accesses. This is one step to increasing security<br />
in networks.<br />
The Cisco Nexus 7000 series is a modular, data<br />
center class switching system.<br />
Many manufacturers do not put high-speed routers (carrier-class<br />
routers, backbone routers or hardware routers) under a<br />
separate heading. They market routers together with high-end<br />
switches (Layer 3 and higher, Enterprise Class). This<br />
makes sense, as today’s switches in the high-end range<br />
often possess routing functionality as well.<br />
Compared to switches, routers ensure better isolation of data traffic, since, for example, they do not forward<br />
broadcasts by default. However, routers generally slow down the data transfer process. Nevertheless, in branched<br />
networks, especially in WANs, they route data to the destination more effectively. On the other hand, routers<br />
are generally more expensive than switches. Therefore, when considering a purchase, an analysis must be made<br />
of what requirements must be satisfied.<br />
Load Balancing<br />
Load balancing is a common data center application which can be implemented in large switching-routing engines.<br />
One generally uses the term to mean server load balancing (SLB), a method used in network technology to<br />
distribute loads to multiple, separate hosts in the network. The load balancer serves different layers in the OSI<br />
model.<br />
Server load balancing comes into use anywhere a great number of clients create a high density of requests and<br />
thus overload a single server. Typical criteria for determining a need for SLB include data rate, the number of<br />
clients and request rate.<br />
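A minimal SLB sketch, assuming simple round-robin distribution over a pool of made-up server names (real load balancers also weigh health checks, sessions and load metrics):

```python
# Round-robin server load balancing (SLB) in its simplest form.
import itertools

class RoundRobinBalancer:
    def __init__(self, servers: list[str]):
        self._cycle = itertools.cycle(servers)  # endless rotation over the pool

    def pick(self) -> str:
        """Return the next server in rotation for an incoming request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["srv-a", "srv-b", "srv-c"])  # hypothetical hosts
print([lb.pick() for _ in range(5)])  # requests spread evenly over the pool
```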
Firewall<br />
A firewall is a software component that is used to restrict access to the network on the basis of the address of the<br />
sender or destination and services used, up to OSI layer 7. The firewall monitors the data running through it and<br />
uses established rules to decide whether or not to let specific network packets through. In this way, the firewall<br />
attempts to block unauthorized access to the network, since a security vulnerability could otherwise be exploited<br />
to perform unauthorized actions on a host.<br />
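The rule-based filtering described above can be sketched as a first-match rule list. The addresses, ports and rules below are purely illustrative, and the prefix test is deliberately naive compared to real firewalls:

```python
# First-match packet filtering: walk the rule list, first matching rule wins.
# Rules: (source-address prefix or None, destination port or None, action).
RULES = [
    ("10.0.0.", 80,   "allow"),   # web traffic from the internal net
    ("10.0.0.", None, "deny"),    # everything else from that net
    (None,      None, "deny"),    # default: drop anything unmatched
]

def decide(src_ip: str, dst_port: int) -> str:
    for src_prefix, port, action in RULES:
        if src_prefix is not None and not src_ip.startswith(src_prefix):
            continue  # source does not match this rule
        if port is not None and port != dst_port:
            continue  # port does not match this rule
        return action
    return "deny"

print(decide("10.0.0.7", 80))  # matched by the first rule
print(decide("10.0.0.7", 22))  # falls through to the second rule
```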
A distinction is made, on the basis of where the firewall<br />
software is installed, between a personal firewall (also<br />
known as desktop firewall) and an external firewall (also<br />
known as network or hardware firewall). In contrast to a<br />
personal firewall, the software for an external firewall does<br />
not operate on the system to be protected, but runs on a<br />
separate device (appliance) which connects the networks or<br />
network segments to one another and also restricts access<br />
between the networks, by means of the firewall software.<br />
Other firewall functions include intrusion detection and<br />
prevention (IDP), which checks the data transfer for abnormalities,<br />
as well as content/URL filtering, virus checking<br />
and spam filtering. High-end devices use these functions to<br />
check data transfers up to 30 Gbit/s.<br />
With the introduction of its SuperMassive E10000,<br />
Sonicwall has announced a high-end firewall series.<br />
This allows companies to protect entire data centers<br />
from intruders.<br />
3.6.4. Connection Technologies / Interfaces (RJ45, SC, LC, MPO, GBICs/SFPs)<br />
Active network components have different interfaces available, and different methods exist for linking up different<br />
devices:<br />
• Permanently installed interfaces (the network technology in use must be known for the purchase decision)<br />
• Flexible interfaces, so-called GBICs (Gigabit Interface Converters) / SFPs (Small Form-factor Pluggables)<br />
<strong>Data</strong> centers support different transmission media<br />
(listed in section 3.9), depending on the network technology<br />
they use (transmission protocols listed in<br />
section 3.8), which can be classified into two groups:<br />
• Copper cables with RJ45 plug type<br />
• Fiber optic cables (multi-mode / single-mode)<br />
with SC / SC-RJ, LC / duplex plug types<br />
GBICs / SFPs are available for both media types.<br />
Cisco Nexus 7000 series 48-port gigabit Ethernet SFP module<br />
Gigabit Ethernet can be implemented over copper cables without a problem for typical LAN distances. This<br />
becomes more difficult with 10 gigabit Ethernet, even though network interface cards with a copper interface for<br />
10 GbE are available. The IEEE 802.3an standard defines transmission at 10 Gbit/s via twisted-pair cabling<br />
over a maximum length of 100 m.<br />
In the backbone area, fiber optic cables are usually used for 10 gigabit Ethernet, with LC plugs used for greater<br />
distances in single-mode implementation.<br />
So-called next-generation networks – 40/100 gigabit Ethernet networks with a maximum data transmission rate<br />
of 40 Gbit/s and 100 Gbit/s respectively – must, by standard, be implemented using high-quality OM3 and OM4<br />
fiber optic cables and MPO-based (multi-fiber push-on) parallel optic transceivers in combination with equally<br />
high-quality MPO/MTP ® connection technology.<br />
12-fiber MPO/MTP® plugs are used for the transfer of 40 gigabit Ethernet. The four outer fibers on each side<br />
carry the transmit and receive lanes, while the four middle channels remain unused. By contrast, the transfer of<br />
100 Gbit/s requires 10 fibers each for transmission and reception. This is realized either by means of the 10<br />
middle channels of two 12-fiber MPOs/MTPs®, or with a single 24-fiber MPO/MTP® plug. Additional information on this<br />
follows in sections 3.10.1 and 3.10.2.<br />
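The 12-fiber assignment described above can be tabulated; the position numbering below is a sketch of the common 40GBASE-SR4 scheme:

```python
# Fiber usage in a 12-fiber MPO/MTP connector for 40 gigabit Ethernet
# (4 x 10 Gbit/s lanes): the outer four positions on each side carry
# traffic, the middle four stay dark.
TX_POSITIONS = [1, 2, 3, 4]      # transmit lanes
RX_POSITIONS = [9, 10, 11, 12]   # receive lanes
UNUSED_POSITIONS = [5, 6, 7, 8]  # middle channels remain unused

def lane_usage(position):
    if position in TX_POSITIONS:
        return "transmit"
    if position in RX_POSITIONS:
        return "receive"
    return "unused"

for pos in range(1, 13):
    print(f"fiber {pos:2d}: {lane_usage(pos)}")
```

For 100 Gbit/s, two such connectors (using the ten middle channels each) or one 24-fiber connector provide the 10+10 lanes, as noted above.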
GBIC / SFP<br />
The prevailing design in expansion slots is Small Form-Factor Pluggable (SFP),<br />
also known as Mini-GBIC, SFF GBIC, GLC, "New GBIC" or "Next Generation<br />
GBIC"; it is usually just called Mini-GBIC for short. Mini-GBICs<br />
reach their physical limits beyond gigabit Ethernet, since they are only specified<br />
up to about 5 Gbit/s. Higher speeds like 10 gigabit Ethernet require SFP+ or<br />
the somewhat bigger XFP or XENPAK modules. XENPAK already has two<br />
successors, XPAK and X2.<br />
Netgear module / 1000Base-T<br />
SFP / RJ-45 / GBIC<br />
The QSFP module (Quad Small Form-Factor Pluggable) is a transceiver<br />
module for 40 gigabit Ethernet and is meant to replace four SFP modules. The<br />
compact QSFP module has dimensions of 8.5 mm x 18.3 mm x 52.4 mm, and<br />
can be used for 40 gigabit Ethernet, Fibre Channel and InfiniBand.<br />
The idea for new designs arises from the need to house more connections in<br />
the same area.<br />
Mellanox ConnectX®-2 VPI adapter<br />
card, 1 x QSFP, QDR<br />
• 1 gigabit Ethernet: Mini-GBIC, SFP<br />
• 10 gigabit Ethernet: SFP+, XFP, XENPAK, XPAK, X2<br />
• 40 gigabit Ethernet: QSFP, MPO<br />
• 100 gigabit Ethernet: MPO<br />
The official GBIC standard is governed by the SFF Committee (http://www.sffcommittee.org).<br />
A GBIC module is inserted into an electrical interface in order, for example, to convert it to an optical connection. By using GBICs, the type of signal to be transmitted can be adapted to specific transmission requirements. Individual GBICs, also known as transceivers, are available with both electrical and optical interfaces and are made up of separate transmitting and receiving devices.<br />
A transceiver module can be replaced during normal<br />
operation (hot-swappable) and operates as a fiber<br />
module, typically for wavelengths of 850 nm, 1310 nm<br />
and 1550 nm.<br />
There are also third-party suppliers who provide transceivers, including models with an intelligent configurator for<br />
flexible use in any system. However, one must first check whether the switch/router manufacturer blocks these<br />
"open" modules before they can be used.<br />
Example of an SFP switching solution from Hewlett Packard:<br />
HP E6200-24G-mGBIC yl switch<br />
Key Features<br />
• Distribution layer<br />
• Layer 2 to 4 and intelligent edge feature set<br />
• High performance, 10-GbE uplinks<br />
• Low-cost mini-GBIC connectivity: 24 open mini-GBIC (SFP) slots<br />
• Supports a maximum of four 10-GbE ports with an optional module<br />
Transceiver modules:<br />
• HP X131 10G X2 SC ER Transceiver (J8438A): X2 form-factor transceiver supporting the 10-gigabit ER standard; 10-gigabit connectivity up to 30 km on single-mode fiber (40 km on engineered links).<br />
• HP X130 CX4 Optical Media Converter (J8439A): Optical media converter for CX4 (10G copper); multimode fiber cable up to 300 m.<br />
• HP X131 10G X2 SC SR Transceiver (J8436A): X2 form-factor transceiver supporting the 10-gigabit SR standard; 10-gigabit connectivity up to 300 m on multimode fiber.<br />
• HP X111 100M SFP LC FX Transceiver (J9054B): Small form-factor pluggable (SFP) 100Base-FX transceiver; 100 Mbps full-duplex connectivity up to 2 km on multimode fiber.<br />
• HP X131 10G X2 SC LR Transceiver (J8437A): X2 form-factor transceiver supporting the 10-gigabit LR standard; 10-gigabit connectivity up to 10 km on single-mode fiber.<br />
• HP X131 10G X2 SC LRM Transceiver (J9144A): X2 form-factor transceiver supporting the 10-gigabit LRM standard; 10-gigabit connectivity up to 220 m on legacy multimode fiber.<br />
• HP X112 100M SFP LC BX-D Transceiver (J9099B): SFP 100-Megabit BX (bi-directional) "downstream" transceiver; 100 Mbps full-duplex connectivity up to 10 km on one strand of single-mode fiber.<br />
• HP X112 100M SFP LC BX-U Transceiver (J9100B): SFP 100-Megabit BX (bi-directional) "upstream" transceiver; 100 Mbps full-duplex connectivity up to 10 km on one strand of single-mode fiber.<br />
• HP X121 1G SFP LC LH Transceiver (J4860C): SFP gigabit LH transceiver; full-duplex gigabit solution up to 70 km on single-mode fiber.<br />
• HP X121 1G SFP LC SX Transceiver (J4858C): SFP gigabit SX transceiver; full-duplex gigabit solution up to 550 m on multimode fiber.<br />
• HP X121 1G SFP LC LX Transceiver (J4859C): SFP gigabit LX transceiver; full-duplex gigabit solution up to 10 km (single-mode) or 550 m (multimode).<br />
• HP X121 1G SFP RJ45 T Transceiver (J8177C): SFP gigabit copper transceiver; full-duplex gigabit solution up to 100 m on Category 5 or better cable.<br />
• HP X122 1G SFP LC BX-D Transceiver (J9142B): SFP gigabit-BX (bi-directional) "downstream" transceiver; full-duplex gigabit solution up to 10 km on one strand of single-mode fiber.<br />
• HP X122 1G SFP LC BX-U Transceiver (J9143B): SFP gigabit-BX (bi-directional) "upstream" transceiver; full-duplex gigabit solution up to 10 km on one strand of single-mode fiber.<br />
3.6.5. Energy Requirements of Copper and Fiber Optic Interfaces<br />
Acquisition costs for active network components with fiber optic connections are significantly higher than those for<br />
active network components with RJ45 connections for copper cables. However, one also must not ignore their<br />
different energy requirements. If you compare a copper solution to a fiber optic solution on the basis of their power<br />
consumption, a surprising picture results.<br />
Energy requirement when using a 10 Gbit/s switch chassis:<br />
336 fiber optic connections:<br />
• 7 modules with 48 FO ports at 2 W*: 672 W<br />
• Total ventilation: 188 W<br />
• NMS module: 89 W<br />
• 10 Gb fiber uplinks: 587 W<br />
• Total (with fiber optic ports): 1.54 kW<br />
336 copper connections:<br />
• 7 modules with 48 copper ports at 6 W*: 2,016 W<br />
• Total ventilation: 188 W<br />
• NMS module: 89 W<br />
• 10 Gb fiber uplinks: 587 W<br />
• Total (with copper ports): 2.88 kW<br />
* Average based on manufacturer specifications<br />
A switch equipped with fiber optic ports needs only half the energy of that required by a copper solution!<br />
If you take total energy requirements into consideration (see section 1.9), the following picture results:<br />
FO switch:<br />
• Power consumption: 1.54 kW<br />
• DC/DC conversion: 0.28 kW<br />
• AC/DC conversion: 0.48 kW<br />
• Power distribution: 0.06 kW<br />
• Uninterruptible power supply: 0.22 kW<br />
• Cooling: 1.65 kW<br />
• Switchgear in energy distribution: 0.15 kW<br />
• Total power requirement: 4.4 kW<br />
Copper switch:<br />
• Power consumption: 2.88 kW<br />
• DC/DC conversion: 0.52 kW<br />
• AC/DC conversion: 0.89 kW<br />
• Power distribution: 0.12 kW<br />
• Uninterruptible power supply: 0.40 kW<br />
• Cooling: 3.08 kW<br />
• Switchgear in energy distribution: 0.29 kW<br />
• Total power requirement: 8.2 kW<br />
If these values are now compared using different PUE factors, significant differences in operating costs result (see<br />
section 2.4.3). Operating costs exceed acquisition costs after only a few years. The proportionate data center costs<br />
(like maintenance, monitoring, room costs, etc.) have yet to be considered.<br />
Power costs for switching hardware:<br />
• Power consumption (during operation): fiber optic switch 4.4 kW, copper switch 8.2 kW<br />
• Hours per year: 24 h x 30.5 days x 12 months = 8,784 h<br />
• Power consumption per year: 38,650 kWh vs. 72,029 kWh<br />
• Power costs per kWh: 0.15 €<br />
• Power costs per year: 5,797.50 € vs. 10,804.35 €<br />
Energy costs by energy efficiency:<br />
• PUE factor = 3.0: 17,392.50 € / year vs. 32,413.05 € / year<br />
• PUE factor = 2.2: 12,754.50 € / year vs. 23,769.57 € / year<br />
• PUE factor = 1.6: 9,276.00 € / year vs. 17,286.96 € / year<br />
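The cost rows above follow directly from power draw, operating hours, electricity price and PUE factor; the arithmetic can be reproduced as below (small differences versus the printed table stem from intermediate rounding of the kWh figures):

```python
# Annual energy cost = power draw x hours per year x price per kWh x PUE.
HOURS_PER_YEAR = 24 * 30.5 * 12   # 8,784 h, as used in the table
PRICE_PER_KWH = 0.15              # EUR per kWh

def annual_cost(power_kw, pue=1.0):
    """Annual energy cost in EUR for a device drawing power_kw,
    scaled by the data center's PUE factor."""
    return power_kw * HOURS_PER_YEAR * PRICE_PER_KWH * pue

for label, power in (("fiber optic switch", 4.4), ("copper switch", 8.2)):
    for pue in (3.0, 2.2, 1.6):
        print(f"{label}, PUE {pue}: {annual_cost(power, pue):,.2f} EUR/year")
```

The same function makes it easy to plug in a site's actual electricity price and measured PUE when comparing port technologies.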
3.6.6. Trends<br />
Some trends which increase demands on network hardware and those of the overall data center are listed below.<br />
• Data volumes and bandwidth demands are increasing (context-aware computing) as a result of<br />
changes in the use of ICT everywhere and for everything (a result of the combination/convergence of<br />
WLAN technology and mobile terminal devices). The increasing use of live video over networks<br />
contributes to this trend.<br />
• Latency is tolerated less and less (e.g. in real-time applications).<br />
• The higher power densities of more efficient servers require data center changes.<br />
• High ICT availability is becoming increasingly important in companies.<br />
• Companies must professionalize their data centers, whether this be through the construction of new<br />
facilities, modernization or outsourcing.<br />
• Cloud computing is resulting in higher data network loads.<br />
• Virtualization-related themes are gaining greater importance.<br />
• Archiving is required both for legislative and for risk management purposes. This issue has been<br />
promoted for years but has not yet been fully resolved. When companies do take on these<br />
topics, the related area of storage and networks will gain urgency.<br />
• Energy efficiency, sustainability and reducing costs will continue to drive the market. It is expected that<br />
these themes will extend to active components in data center infrastructures.<br />
• Plug-and-play solutions will make an entrance, since the labor market does not provide the required<br />
personnel and/or work must be completed in shorter and shorter amounts of time. In addition, data<br />
centers are service providers in companies and must deliver “immediately” if projects require them to do<br />
so. Oftentimes those responsible in the data center are not consulted during project planning.<br />
• The companies within a value chain will continue to build up direct communication. Other communities will<br />
develop (such as the Proximity Services of the German stock market) and/or links from data centers will<br />
be created.<br />
3.7. Virtualization<br />
Traditionally, all applications run separately on individual servers; servers are loaded unevenly and in most<br />
cases, for reasons of performance, more servers are in operation than are required. However, developments in<br />
operating systems allow more than one application to be provided per server. As a result, operating resources are<br />
better loaded to capacity through this “virtualization” of services, and the actual number of physical servers is<br />
drastically reduced.<br />
A distinction is made between two different types of virtualization:<br />
• Virtualization by means of virtualization software (e.g. VMware)<br />
• Virtualization at the hardware level (e.g. AMD64 with Pacifica)<br />
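One way to see which type a given machine supports is to check the CPU flags that advertise hardware virtualization on Linux: "vmx" marks Intel VT-x, "svm" marks AMD-V (the technology codenamed "Pacifica"). A minimal sketch, with an invented sample input:

```python
# Detect hardware virtualization support from /proc/cpuinfo content
# (Linux): "vmx" = Intel VT-x, "svm" = AMD-V ("Pacifica").
def hardware_virtualization_flags(cpuinfo_text):
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return sorted(flags & {"vmx", "svm"})

# Invented sample; on a real system read open("/proc/cpuinfo").read().
sample = "processor : 0\nflags : fpu vme svm sse2\n"
print(hardware_virtualization_flags(sample))  # ['svm'] -> AMD-V capable
```

An empty result means the hypervisor must fall back to pure software virtualization.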
The virtualization of servers and storage systems continues to make great strides, as we already mentioned in<br />
sections 1.8 and 1.10. The advantages of this development are obvious.<br />
• Less server hardware is required.<br />
• Energy requirements can be reduced.<br />
• Resource management is simplified.<br />
• The patching between components is clearer.<br />
• Security management can be improved.<br />
• Recovery is simplified.<br />
• Application flexibility is increased.<br />
• Acquisition and operating costs can be reduced.<br />
• and much more.<br />
The advance of virtualization can include the following effects on data center cabling:<br />
• Higher-performance connections, so-called "fat pipes", are required because of increased<br />
network traffic (data volumes).<br />
• More connections are required for each server and between switches because of aggregation (trunking).<br />
• An increased number of additional redundant connections are created as a result of increased demands<br />
on data center failure safety.<br />
3.7.1. Implementing Server / Storage / Client Virtualization<br />
The primary goal of virtualization is to provide the user a layer of abstraction that isolates him or her from the<br />
actual hardware – i.e. computing power and disk space. A logical layer is implemented between users and<br />
resources so as to hide the physical hardware environment. In the process, every user is led to believe (as best as<br />
possible) that he or she is the sole user of a resource. Multiple heterogeneous hardware resources are combined<br />
into one homogeneous environment. It is generally the operating system's job to manage this process in a way<br />
that is invisible and transparent to users.<br />
The level of capacity utilization for all infrastructure components increases in data centers as a result of the<br />
increased use of virtual systems like blade servers and Enterprise Cloud Storage. For example, where servers<br />
were previously loaded to only 15 to 25 percent, this value increases to 70 or 80 percent in a virtualized<br />
environment. As a result, systems consume significantly more power and in turn generate many times more waste<br />
heat.<br />
The adjacent graphic<br />
shows the development<br />
of server compaction.<br />
Client Virtualization, or so-called desktop virtualization, is a result of the ongoing development of Server and<br />
Storage Virtualization. In this process, a complete PC desktop is virtualized in the data center instead of just an<br />
individual component or application. This process allows multiple users to execute application programs<br />
on a remote computer simultaneously and independently of one another.<br />
The virtualization of workstations involves individually configured operating system instances that are provided for<br />
individual users through a host. Each user therefore works in his or her own virtual system environment that in<br />
principle behaves like one complete local computer. This is a different process from providing a terminal server in<br />
which multiple users share resources in a specially configured operating system.<br />
The advantages of a virtual client lie in its individuality and its ability to run hosts at a central location. Resources<br />
are optimized through the common use of hardware.<br />
Disadvantages arise as a result of operating systems which are provided redundantly (and the associated<br />
demand for resources), as well as the necessity of providing network communication to operate these systems.<br />
Classic Structure<br />
The adjacent graphic shows an example of a classic server and storage structure in a data center with separate networks. Each server is connected physically to two networks: the LAN for communication and the SAN for storage media.<br />
This leads to an inefficient use of the available storage capacity, since the storage of each application is handled separately and its reserved resources must be kept available.<br />
Servers are typically networked via copper cables in the LAN/access area of the data center. Optical fibers are the future-proof cable of choice in the aggregation and core area. Fibre Channel is the typical application for data center storage.<br />
[Diagram: classic structure – LAN aggregation & core switch and LAN access switch (Ethernet) on one side, FC storage on the other, each server connected to both; FO and copper cabling]<br />
SAN Structure<br />
A network with SAN switches can greatly increase the efficiency of storage space in the network. Data can be consolidated into fewer systems, which perform better and are better utilized.<br />
This behavior in the LAN, or more precisely the SAN, is like that of the Internet, where storage space exists in a "cloud" whose actual location is unimportant. Storage consolidation and storage virtualization are terms that often come up in this context.<br />
The advantage of Fibre Channel is that it optimizes the transfer of mass data, i.e. block-oriented data transfer. This advantage over Ethernet has made Fibre Channel the most commonly used transmission protocol in SANs.<br />
[Diagram: SAN structure – LAN aggregation & core switch and LAN access switch (Ethernet), plus an FC SAN switch providing virtual storage to the servers; FO and copper cabling]<br />
3.7.2. Converged Networks, Effect on Cabling<br />
Classic LANs and SANs use different transmission protocols, typically Ethernet for the LAN and Fibre Channel for the SAN. However, LANs as communication networks and SANs as storage networks are merging (converging) more and more. Current transmission protocols under continuous development support this standardization and are paving the way for a structure with FCoE, Fibre Channel over Ethernet.<br />
[Diagram: starting point – LAN aggregation & core switch and LAN access switch (Ethernet) beside an FC SAN switch with FC storage; servers connected to both networks; FO and copper cabling]<br />
The 3-stage migration path per the Fibre Channel Industry Association (FCIA: www.fibrechannel.org) is presented below. This process is based on hardware reusability and the life cycles of the individual components.<br />
Stage 1<br />
The introduction of a new switch, a so-called Converged Network Switch, allows existing hardware to be reused. In smaller data centers it is recommended that storage systems be connected directly via this Converged Switch; in larger structures a dedicated SAN switch would be required to avoid bottlenecks. Converged Adapters are required for the server connection.<br />
FCIA recommends SFP+ as its transceiver interface of choice (fiber optic & copper). Since a copper solution is possible only up to 10 meters using Twinax cables, fiber optic cables must replace copper ones in these "converging" networks.<br />
[Diagram: stage 1 – LAN aggregation & core switch (Ethernet) and FC SAN switch with FC storage feeding a 10G converged network switch; servers attached via FCoE (SFP+: Twinax or FO)]<br />
Stage 2<br />
The Converged Switch is upgraded with an FCoE uplink in a second migration stage and connected to FCoE in the backbone.<br />
[Diagram: stage 2 – LAN aggregation & core switch, 10G LAN access switch and SAN switch linked via FCoE; storage still attached via FC]<br />
Stage 3<br />
The final FCoE solution is connected directly to the core network. FC as an independent protocol has disappeared from the network.<br />
[Diagram: stage 3 – LAN aggregation & core switch, 10G LAN access switch and FCoE storage; FCoE end to end; FO and copper cabling]<br />
3.7.3. Trends<br />
More than half (54 percent) of the IT infrastructure<br />
components in the SME segment<br />
in Germany have already been moved into<br />
the “cloud”. According to one study, this<br />
places Germany in the middle range of<br />
this sector when compared with the rest of<br />
Europe. Number one is Russia with 79 percent,<br />
followed closely by Spain with 77 percent<br />
and France with 63 percent. The Dutch<br />
are currently still the most reluctant in this<br />
area, with 40 percent.<br />
Over 1,600 decision makers throughout<br />
Europe from businesses with up to 250 employees<br />
were asked about their use of virtualization<br />
and cloud computing for a study<br />
by the market research institute Dynamic Markets, commissioned by VMware. In addition to 204 mid-sized companies<br />
from Germany, companies from France, Great Britain, Italy, the Netherlands, Poland, Russia and Spain<br />
also participated in the survey carried out from the end of January to the middle of February of 2011.<br />
The cloud service used most often by mid-sized<br />
companies in Germany is storage<br />
outsourcing (68 percent). Applications used<br />
in the cloud include e-mail services (55<br />
percent) and office applications for processing<br />
documents, tables and spreadsheets (53<br />
percent).<br />
According to the study, the company<br />
management in over a third of the<br />
companies surveyed supports the cloud<br />
strategy. This is a sign that German mid-sized<br />
companies do not judge cloud<br />
computing to be a purely IT-related topic.<br />
The management of the companies surveyed see cloud computing's greatest advantages not only in lower<br />
costs for IT maintenance (42 percent), hardware equipment (38 percent) and power (31<br />
percent), but also in improved productivity and cooperation of the IT personnel themselves. All in all, 76 percent of<br />
German SMEs stated they achieved lasting benefits through cloud computing. Accordingly, about a third of the<br />
companies surveyed are also planning to extend their cloud activities in the next twelve months.<br />
Virtualization as a Prerequisite for Cloud Computing<br />
A key technical prerequisite for providing IT services quickly and flexibly is virtualization. This was the opinion of<br />
82 percent of the SMEs in Germany. Already 75 percent of the German mid-sized companies that were surveyed<br />
have virtualized. This number is thus greater than the average for all of Europe (73 percent). Russia is again a<br />
step ahead in this area with 96 percent.<br />
"The technological basis for setting up a cloud environment is a high level of virtualization, ideally 100 percent",<br />
according to Martin Drissner of the systems and consulting company Fritz & Macziol. “That’s the only way you can<br />
guarantee that resources can be distributed and used flexibly.” According to Drissner, the cloud is not a revolution,<br />
but the next logical step in IT development. Cloud Computing would simplify and speed up processes and also<br />
lower costs. It is recommended that a private cloud, i.e. a solution internal to the company, be set up as a first step<br />
to entering the world of Cloud Computing.<br />
Reasons for Virtualization<br />
Cost-related arguments for virtualization are moving into the background more and more. Where previously points<br />
like reduction in costs for hardware (35 percent), maintenance (36 percent) and power (28 percent) were often the<br />
main drivers for virtualization in the data center, criteria like higher application availability (31 percent), greater<br />
agility (30 percent) and better disaster recovery (28 percent) are becoming more and more important, according to<br />
the survey. In addition, the aspect of gaining more time and resources is becoming an increasingly higher priority<br />
for those surveyed (27 percent), showing a desire to devote more time to projects that are more innovative than<br />
just focusing attention on purely IT operations.<br />
3.8. Transmission Protocols<br />
In order to be able to transfer information from a transmitter to one or more recipients, the<br />
data must be packed properly for transmission, in accordance with how it is transmitted. This<br />
is the job of transmission protocols in data communication. A connection-oriented protocol<br />
first establishes a connection, then sends the data, has this confirmed and finally ends the<br />
connection when the transmission is finished. By contrast, a connectionless protocol entirely<br />
omits the steps of establishing and ending the connection and confirming the transmission,<br />
which is why such a transmission is less secure but faster. Protocols therefore establish<br />
different rules.<br />
A protocol is an agreement on how a connection, communication and data transfer between two parties must be<br />
executed. Protocols can be implemented via hardware, software, or a combination of the two.<br />
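The connection-oriented versus connectionless distinction described above is visible directly in the socket API. A loopback sketch of the connectionless case (addresses arbitrary):

```python
import socket

# Connectionless (UDP): no handshake, no delivery confirmation --
# a datagram is simply sent to an address.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)         # fire and forget

data, _ = receiver.recvfrom(1024)
print(data)                           # b'hello'

# A connection-oriented (TCP) exchange would instead require connect()
# (connection setup), send()/recv() with acknowledgements, and close()
# (connection teardown) -- the extra steps buy reliability at the cost
# of speed, as described above.
sender.close()
receiver.close()
```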
Protocols are divided up into:<br />
• Transport-oriented protocols (OSI layer 1–4) & application-oriented protocols (OSI layer 5–7)<br />
• Routable and non-routable protocols (ability to forward through routers, IP = routable)<br />
• Router protocols (path selection decision for routers, e.g. RIP, OSPF, BGP, IGRP, etc.)<br />
A transmission protocol (also known as a network protocol) is a precise arrangement by which data are exchanged<br />
between computers or processors that are connected to one another via a network (distributed system).<br />
This arrangement consists of a set of rules and formats (syntax) which specifies the communication behavior of<br />
the instances which are communicating in the computers (semantics).<br />
The exchange of messages requires an interaction of different protocols, which assume different tasks. The individual<br />
protocols are organized into layers so as to control the complexity associated with the process. In such an<br />
architecture, each protocol belongs to a specified layer and is responsible for carrying out specific tasks (for<br />
example, checking data for completeness – layer 2). Protocols at higher layers use the services from protocols at<br />
lower layers (e.g. layer 3 relies on the fact that all data arrived in full). The protocols structured in this way together<br />
form a protocol stack (this is currently typically a TCP/IP protocol stack).<br />
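Layering means each protocol wraps the payload handed down from above with its own header and strips it on the way back up. A toy sketch of this encapsulation (the bracketed header format is invented for illustration):

```python
# Toy protocol stack: each layer prepends its own header on the way down
# and strips it on the way up -- the essence of a protocol stack.
def encapsulate(payload, layers):
    for name in reversed(layers):          # highest layer wraps first
        payload = f"[{name}]".encode() + payload
    return payload

def decapsulate(frame, layers):
    for name in layers:                    # lowest layer unwraps first
        header = f"[{name}]".encode()
        assert frame.startswith(header), f"missing {name} header"
        frame = frame[len(header):]
    return frame

STACK = ["ethernet", "ip", "tcp"]          # lower layers listed first
frame = encapsulate(b"GET /", STACK)
print(frame)                  # b'[ethernet][ip][tcp]GET /'
print(decapsulate(frame, STACK))           # b'GET /'
```

Real stacks such as TCP/IP work the same way, only with binary headers carrying addresses, checksums and sequence numbers instead of readable labels.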
3.8.1. Implementation (OSI & TCP/IP, Protocols)<br />
The OSI reference model was drafted starting in 1977 by the International Organization for Standardization as a<br />
basis for developing communication standards, and published in 1984 (ISO 7498). The goal of Open Systems<br />
Interconnection (OSI) is a communication system within heterogeneous networks, especially between different<br />
computer worlds, that is based on basic services supported by applications. These basic services include, for<br />
example, file transfer, the virtual terminal, remote access to files and the exchange of electronic mail. In addition to<br />
the actual application data, these and all other communication applications require additional structural and<br />
procedural information that are specified as OSI protocols.<br />
[Diagram: the seven OSI layers and where they are typically implemented]<br />
• 7 - Application Layer<br />
• 6 - Presentation Layer<br />
• 5 - Session Layer<br />
• 4 - Transport Layer<br />
• 3 - Network Layer<br />
• 2 - <strong>Data</strong> Link Layer<br />
• 1 - Physical Layer<br />
The upper layers are typically implemented in applications on the computer, the middle layers in active network components, and layer 1 in passive components.<br />
Layer 1 – Physical Layer / Bit Transmission Layer<br />
This layer is responsible for establishing and ending connections as well as for the physical representation of protocol information. It specifies, among other things, what voltage level represents the logical information "1" or "0".<br />
Cables and plugs, electrical and optical signals can be assigned to this layer.<br />
Layer 2 – <strong>Data</strong> Link Layer<br />
The Data Link Layer examines the bit stream for transmission errors, and error-correcting functions are used if<br />
necessary; this is why the layer is also called the Security Layer. It is also responsible for flow control, in case data<br />
arrives faster than it can be processed. In cell-based transfers like ATM, cells are numbered so that the recipient can put<br />
cells, which sometimes arrive out of order due to the use of different transmission paths, back into their original<br />
order.<br />
In LAN applications, this layer is subdivided into the sublayers Logical Link Control (LLC) and Media Access<br />
Control (MAC). The MAC sublayer assigns transmission rights. The best-known transmission protocol defined by the<br />
IEEE is Ethernet (IEEE 802.3).<br />
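The MAC framing described above can be made concrete with a short Python sketch that unpacks the 14-byte MAC header of an Ethernet frame; the frame bytes and addresses below are invented for illustration.

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Split the first 14 bytes of an Ethernet (IEEE 802.3) frame into
    destination MAC, source MAC and EtherType."""
    if len(frame) < 14:
        raise ValueError("an Ethernet header is 14 bytes long")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])

    def fmt(mac: bytes) -> str:
        return ":".join(f"{b:02x}" for b in mac)

    return fmt(dst), fmt(src), ethertype

# A hypothetical broadcast frame carrying an IPv4 packet (EtherType 0x0800)
frame = bytes.fromhex("ffffffffffff" "0024e8aabbcc" "0800") + b"payload"
dst, src, etype = parse_ethernet_header(frame)
```

The EtherType field tells the receiving stack which layer 3 protocol the payload belongs to, which is exactly the hand-off between the sublayers described above.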
[Figure: Overview of the IEEE 802 architecture mapped onto OSI layers 1–3 – IEEE 802.1 (architecture overview, management, MAC bridging) spans the stack; IEEE 802.2 defines Logical Link Control on layer 2; below it, the MAC and Physical layers are defined per technology, e.g. IEEE 802.3 Ethernet (10Base…, 100Base…, 1000Base…, 10GBase…, etc.) and IEEE 802.11 Wireless LAN (frequency hopping, direct sequence, infrared; 802.11a 5 GHz, 802.11b/g 2.4 GHz, 802.11n, etc.), below the Network layer (Internet/IP).]<br />
Network adapters as well as switches (layer 2) can be assigned to this layer.<br />
Layer 3 – Network Layer<br />
This layer is responsible for routing information that is transported over multiple heterogeneous networks. This<br />
model assumes that the data from the <strong>Data</strong> Link Layer (layer 2) is correct and free of errors. IP network addresses<br />
are read and evaluated, and the information is passed to the next stages (networks) through the use of routing<br />
tables. In this process, routing parameters like tariff rates, maximum bit transmission rates, network capacities,<br />
thresholds and information on service quality are reported to routers by means of appropriate configurations.<br />
In addition to routers, Layer 3 switches are also part of this layer.<br />
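The routing-table lookup described above can be sketched in a few lines of Python; the networks and next-hop names are invented for illustration, and real routers implement this longest-prefix match in dedicated hardware.

```python
import ipaddress

# A router's layer 3 forwarding decision reduces to a longest-prefix match
# of the destination IP address against the routing table.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "core-router",
    ipaddress.ip_network("10.1.0.0/16"): "distribution-switch",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific route wins
    return routing_table[best]
```

A destination inside 10.1.0.0/16 is forwarded to the more specific route even though it also matches 10.0.0.0/8 and the default route.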
Layer 4 – Transport Layer<br />
This layer is responsible for controlling end-to-end communication; its information is therefore not evaluated in<br />
the intermediate stations (switches or routers). If this layer detects that a connection is unstable<br />
or has failed, it takes care of re-establishing the connection (handshake). In addition, this layer is<br />
sometimes responsible for address conversions.<br />
Devices marketed as layer 4 switches are now available. The name is intended to stress the fact that certain<br />
protocol functions from this layer are integrated into these devices.<br />
Layer 5 – Session Layer<br />
The primary task of the Session Layer is to establish, manage and terminate sessions on behalf of the Application<br />
Layer. In the process, this layer also specifies whether the connection is full-duplex or half-duplex.<br />
This layer is also responsible for monitoring and synchronizing the data stream, among other tasks.<br />
Layer 6 – Presentation Layer<br />
Different computer systems use different structures for data formatting. Layer 6 interprets the data and ensures<br />
that the syntax is uniform and correct (character set, encoding). Here, the data format from the transmitter is<br />
converted into a format that is not specific to any terminal device. Cryptography is applied in this layer for data<br />
security purposes.<br />
Layer 7 – Application Layer<br />
In this layer, the transmitted data is made available to the user, or rather to his or her programs. Standardization<br />
is hardest for this layer because of the multitude of available applications. These include well-known<br />
application protocols like HTTP, FTP, SMTP, DNS, DHCP, etc.<br />
Order of events in a transmission<br />
If a piece of information is transmitted from one device to another, the transmitter starts the information<br />
flow in layer 7 and forwards it down, layer by layer, right down to layer 1. In the process, each layer places a so-called<br />
header containing control information in front of the actual information to be transmitted. For example,<br />
the IP source and destination addresses as well as other information are added in layer 3, and routers use this information<br />
for path selection. An “FCS” field (checksum field) is added in layer 2 as a simple integrity check.<br />
By adding all this extra information, the total amount of information transmitted on the cable becomes far<br />
greater than the actual net information. However, the length of an Ethernet frame is limited: its minimum length is<br />
64 bytes, and its maximum 1,518 bytes. The maximum frame length was expanded by 4 bytes to 1,522 bytes for the<br />
purpose of tagging VLANs, as a result of the frame extension established in IEEE 802.3ac.<br />
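The top-down encapsulation can be illustrated with a minimal Python sketch. The header contents are invented placeholders, not real protocol fields; only the mechanism – each layer prepending its header, layer 2 also appending the FCS trailer – mirrors the description above.

```python
def encapsulate(payload: bytes) -> bytes:
    """Hand application data down the stack, one header per layer."""
    l4 = b"TCPHDR--" + payload        # layer 4: ports, sequence numbers (placeholder)
    l3 = b"IPHDR---" + l4             # layer 3: source/destination IP (placeholder)
    l2 = b"ETHHDR--" + l3 + b"FCS-"   # layer 2: MAC header plus FCS checksum trailer
    return l2

frame = encapsulate(b"hello")

# Ethernet limits the resulting frame length:
MIN_FRAME, MAX_FRAME, MAX_TAGGED = 64, 1518, 1522  # bytes (1,522 with VLAN tag)
```

The frame carries 5 bytes of net data but 28 bytes of overhead – the reason the amount transmitted on the cable is far greater than the actual net information.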
VLAN<br />
Virtual networks or virtual LANs (VLANs) are a technological concept for implementing logical work groups within<br />
a network. A network of this type is implemented via LAN switching or virtual routing on the link layer or on the<br />
network layer. Virtual networks can span a great number of switches, which in turn are connected to<br />
one another through a backbone.<br />
Half- / Full-Duplex<br />
In a telecommunications context, the terms full-duplex and half-duplex describe the direction and timing of a<br />
transmission of information. For half-duplex, this generally means that only one transmission channel is<br />
available: data can either be sent or received, but never both at the same time.<br />
In full-duplex operation, two stations connected with one another can send and receive data simultaneously. Data<br />
streams are transmitted in both directions at the same speed. In a gigabit Ethernet (1 Gbit/s) full-duplex link,<br />
for example, this translates into an aggregate transmission capacity of 2 Gbit/s.<br />
OSI & TCP/IP<br />
Toward the end of the 1960s, as the “cold war” reached its high point, the United<br />
States Department of Defense (DOD) demanded a network technology that would be<br />
very secure against attacks. It should be possible for the network to continue to<br />
operate, even in case of a nuclear war. <strong>Data</strong> transmission over telephone lines was not<br />
suitable for this purpose, since these were too vulnerable to attacks. For this reason,<br />
the United States Department of Defense commissioned the Advanced Research<br />
Projects Agency (ARPA) to develop a reliable network technology. ARPA was founded<br />
in 1957 as a reaction to the launch of Sputnik by the USSR, and had the task of<br />
developing technologies that were of use to the military.<br />
ARPA was later renamed the Defense Advanced Research Projects Agency (DARPA), since its<br />
interests primarily served military purposes. ARPA was not an organization that employed scientists and<br />
researchers itself, but one that handed out contracts to universities and research institutes.<br />
Over time and as ARPANET grew, it became clear that the protocols selected up to that point were no longer<br />
suitable for operating a larger network that also connected many (sub)networks to one another. For this reason,<br />
other research projects were finally initiated, which led to the development of TCP/IP protocols, or the TCP/IP<br />
model, in 1974. TCP/IP (Transmission Control Protocol / Internet Protocol) was developed with the goal of<br />
connecting various networks of different types with one another for purposes of data transmission. In order to<br />
force the integration of TCP/IP protocols into ARPANET, (D)ARPA commissioned the company of Bolt, Beranek &<br />
Newman (BBN) and the University of California at Berkeley to integrate TCP/IP into Berkeley UNIX. This also laid<br />
the foundation for the success of TCP/IP in the UNIX world.<br />
Although the OSI model is recognized worldwide, TCP/IP represents the technically open standard on whose<br />
basis the Internet was developed. The TCP/IP reference model and the TCP/IP protocol stack enable the exchange<br />
of data between any two computers located anywhere in the world.<br />
The TCP/IP model is historically just as significant as the standards that underlie successful developments<br />
in the telephone, electrical engineering, rail transportation, television and video industries.<br />
No general agreement exists on how TCP/IP should be described in a layer model. While the OSI model has an<br />
academic character, the TCP/IP model is closer to programming reality and leans on the structure of existing protocols.<br />
However, both models – OSI as well as TCP/IP – have one thing in common: data is passed down the stack<br />
during transmission. When receiving data from the network, the path through the stack leads back to the top.<br />
A comparison of both models appears below.<br />
OSI layers | TCP/IP layers | Protocol examples | Coupling elements<br />
7 Application / 6 Presentation / 5 Session | 4 Application | HTTP, FTP, SMTP, POP, DNS, DHCP, Telnet | Gateway, Content Switch, Layer 4 to 7 Switch<br />
4 Transport | 3 Transport (host-to-host) | TCP, UDP | Gateway, Content Switch, Layer 4 to 7 Switch<br />
3 Network | 2 Internet | IP, ICMP, IGMP | Router, Layer 3 Switch<br />
2 Data Link | 1 Network access | Ethernet, Token Ring, Token Bus, FDDI | Bridge, Switch<br />
1 Bit transfer (physical) | 1 Network access | Ethernet, Token Ring, Token Bus, FDDI | Cabling, Repeater, Hub, Media converter<br />
Together, the TCP/IP layers 4 (Application) down to 1 (Network access) form the TCP/IP protocol stack.<br />
The best-known use of these protocols is on the Internet, where they take care of:<br />
• Loading websites – (HTTP or HTTPS)<br />
• Sending and receiving e-mails – (SMTP & POP3/IMAP)<br />
• File uploads and downloads – (FTP, HTTP or HTTPS)<br />
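How application data enters the stack in practice can be sketched through the socket API, which hands application-layer data to the transport layer. A local socket pair stands in for a real TCP connection here so the sketch runs without network access, and the hostname in the request is an illustrative placeholder.

```python
import socket

# A minimal HTTP request as it would travel down the stack:
request = (
    "GET / HTTP/1.1\r\n"
    "Host: www.example.com\r\n"   # placeholder hostname
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")

# socketpair() creates two connected endpoints; the OS kernel below this
# API performs the transport, network and link layer work.
a, b = socket.socketpair()
a.sendall(request)
received = b.recv(4096)
a.close()
b.close()
```

From the application's point of view, everything below the socket call – TCP segmentation, IP routing, Ethernet framing – is invisible, which is precisely the layering idea of both models.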
3.8.2. Ethernet IEEE 802.3<br />
The IEEE (Institute of Electrical and Electronics Engineers) is a professional organization that has been active<br />
worldwide since 1884 and had more than 380,000 members from over 150 countries as of 2007. This largest<br />
technical organization, headquartered in New York, is divided into numerous so-called societies that concentrate<br />
on special fields of electrical and information technology. Project work is organized in approximately<br />
300 country-based groups. Through publications like its journal IEEE Spectrum, the organization also contributes<br />
to interdisciplinary information on, and discussion of, the social consequences of new technologies.<br />
IEEE 802 is an IEEE project that started in February 1980 – hence the designation 802. The project concerns<br />
itself with standards in the area of local networks (LAN) and establishes network standards for layers 1 and 2 of<br />
the OSI model. However, IEEE 802 teams also give guidance for sensibly integrating systems into one overall view<br />
(network management, internetworking, ISO interaction).<br />
Different teams were formed within the 802 project, and new ones are formed as needed for new project aspects. The<br />
oldest study group is the CSMA/CD group IEEE 802.3 for Ethernet. CSMA/CD (Carrier Sense Multiple Access with<br />
Collision Detection) describes the original access method. The central topic for this study group is the<br />
discussion of high-speed protocols. The group is divided into various subgroups which concentrate their<br />
efforts on optical fibers in the backbone, inter-repeater links, layer management, and other topics.<br />
The most important teams within the IEEE 802.3 study group include:<br />
• IEEE 802.3ae with 10 Gbit/s Ethernet, since 2002<br />
• IEEE 802.3af with DTE Power via MDI, since 2003 (PoE, Power over Ethernet)<br />
• IEEE 802.3an with 10GBase-T, since 2006<br />
• IEEE 802.3aq with 10GBase-LRM, since 2006<br />
• IEEE 802.3at with DTE Power Enhancements, since 2009 (PoE enhancements)<br />
• IEEE 802.3ba with 40 Gbit/s and 100 Gbit/s Ethernet, since 2010<br />
• IEEE 802.3bg with 40 Gbit/s serial, in progress<br />
The IEEE 802.3ba standard provides for a maximum length of 100 meters over OM3 fiber for 40GBASE-SR4 and<br />
100GBASE-SR10 alike, regardless of data rate. The OM4 fiber is the preferred choice for longer<br />
links (larger data centers, campus backbones, etc.) and for applications with tight power budgets (e.g. device<br />
connections in data centers); it supports 40GBASE-SR4 and 100GBASE-SR10 applications up to a length<br />
of 150 meters.<br />
The 100-meter link length over OM3 supports – regardless of architecture and size – approximately 85 % of all<br />
channels in the data center. The 150-meter link length over OM4 fibers covers almost 100 % of the required range.<br />
The common lengths for both data rates are made possible through parallel optics: the signals are transmitted<br />
over multiple 10 Gbit/s channels. The 40 Gbit/s solution uses four fibers for this purpose in each of the two<br />
directions, while the 100 Gbit/s solution requires ten fibers in each direction. MPO connection technology is used<br />
as the plug connection (12 fibers, 8 of which are used, for 40 GbE; 24 fibers, 20 of which are used, for 100 GbE –<br />
see the migration concept in section 3.10.2).<br />
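The fiber counts follow directly from the 10 Gbit/s lane rate; a minimal sketch of that arithmetic:

```python
def fibers_required(total_gbits: int, lane_gbits: int = 10) -> int:
    """Parallel optics: one fiber per 10 Gbit/s lane and direction."""
    lanes = total_gbits // lane_gbits  # number of parallel 10 Gbit/s channels
    return lanes * 2                   # transmit and receive direction

forty_gbe = fibers_required(40)    # 8 fibers, carried in a 12-fiber MPO
hundred_gbe = fibers_required(100) # 20 fibers, carried in a 24-fiber MPO
```

The unused positions in the 12- and 24-fiber MPO connectors are what the migration concept in section 3.10.2 exploits.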
Since the data rate per channel is the same as in the 10 Gbit/s Ethernet (IEEE 802.3ae) standard, the inevitable<br />
question is why the maximum length for the optical link via OM3 was reduced from 300 to only 100 meters, and<br />
from 550 to 150 meters over OM4.<br />
The goal was to reduce link costs; in particular, the prices of transceivers (combined transmitter and receiver<br />
components) could be lowered. This cost reduction was achieved through relaxed technical specifications for<br />
transceivers, which make it possible to produce smaller, more cost-effective components.<br />
The greatest relaxation of these specifications lies in the greater tolerance for jitter. Jitter is a time-related<br />
signal fluctuation that occurs during bit transmission and is caused by factors like noise or intersymbol interference. In<br />
data center applications like 40GBASE-SR4 and 100GBASE-SR10, jitter begins in the output signal from the<br />
transmitter chip and accumulates further over the optical transmitter, the fiber, the optical receiver and the<br />
receiver chip, through the entire link. Since a large part of the jitter budget is now used up by electronic link<br />
components, overall jitter accumulation is only acceptable if the optical channel is virtually perfect and lossless.<br />
Lasers used in data centers are usually surface-emitting semiconductor lasers, so-called Vertical Cavity Surface<br />
Emitting Lasers (VCSEL, pronounced “vixel”). VCSELs are very attractive for networks in data centers because<br />
they are easy and cost-effective to produce. However, they have a relatively large spectral width for lasers, and<br />
exactly the specifications for this spectral width were also relaxed during the transition from 10 to 40/100 GbE.<br />
The laser’s maximum permitted spectral width was increased from 0.45 to 0.65 nm by IEEE 802.3ba, which<br />
led to increased chromatic dispersion. Chromatic dispersion is the phenomenon of an optical pulse<br />
spreading during its transmission through the optical fiber. A fiber has different refractive indices for<br />
different wavelengths, which is why some wavelengths propagate faster than others. Since a pulse consists of<br />
multiple wavelengths, it will inevitably spread out to a certain degree and thus exhibit a certain dispersion.<br />
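To first order, the dispersion-induced pulse spread is proportional to dispersion coefficient, length and spectral width (Δt ≈ D · L · Δλ). The coefficient magnitude used below is an assumed textbook-style value for multimode fiber around 850 nm, not a standardized figure; the sketch only illustrates how the relaxed spectral width enlarges the spread.

```python
D_PS_PER_NM_KM = 100.0  # assumed dispersion magnitude near 850 nm, ps/(nm*km)
LENGTH_KM = 0.1         # a 100 m OM3 link

def pulse_spread_ps(spectral_width_nm: float) -> float:
    """First-order chromatic pulse spread in picoseconds."""
    return D_PS_PER_NM_KM * LENGTH_KM * spectral_width_nm

old = pulse_spread_ps(0.45)  # spectral width limit before IEEE 802.3ba
new = pulse_spread_ps(0.65)  # relaxed limit in IEEE 802.3ba
```

At a 10 Gbit/s lane rate, a bit period is only 100 ps, so even a few picoseconds of additional spread eat visibly into the timing budget.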
[Figure: Eye diagrams of a binary 10 Gbit/s signal before and after distortion through dispersion, with the decision thresholds marked.]<br />
All optical signals consist of a range of wavelengths. Though this spectrum may only amount to a fraction of a<br />
nanometer, a limited spectral range always exists. The spreading of the pulse (dispersion) in a fiber grows<br />
approximately with the square of the spectral width. This effect, in combination with the additional jitter, is the<br />
reason for the shorter maximum lengths over OM3 and OM4 fibers.<br />
Total system costs for optical cabling are significantly reduced as a result of the relaxed specifications,<br />
but to the detriment of link lengths and permitted insertion losses. While a maximum loss of 2.6 dB over OM3 was<br />
permitted in the 10GBASE-SR application, this dwindles to only 1.9 dB over OM3, or 1.5 dB over OM4, in<br />
40GBASE-SR4 and 100GBASE-SR10 applications (see the tables further below).<br />
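A planner can check a channel against these budgets with simple arithmetic. The connector loss per mated pair below is an assumed planning value, not a product specification; the fiber attenuation of 3.5 dB/km at 850 nm matches the cabled-fiber figure used by IEEE 802.3.

```python
BUDGET_DB = {
    ("40GBASE-SR4", "OM3"): 1.9,
    ("40GBASE-SR4", "OM4"): 1.5,
}

def channel_loss_db(length_m: float, mated_pairs: int,
                    fiber_db_per_km: float = 3.5,   # cabled MMF at 850 nm
                    connector_db: float = 0.35) -> float:  # assumed per mated pair
    """Estimated channel insertion loss: fiber attenuation plus connectors."""
    return length_m / 1000 * fiber_db_per_km + mated_pairs * connector_db

loss = channel_loss_db(100, mated_pairs=2)  # 100 m OM3 with two MPO connections
within_budget = loss <= BUDGET_DB[("40GBASE-SR4", "OM3")]
```

With two 0.35 dB connections the 100 m channel stays well inside the 1.9 dB budget; a third or fourth mated pair, or lossier connectors, quickly consumes the remaining margin.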
Planners, installers and users must be aware that selecting a provider like R&M, whose products guarantee<br />
minimal optical losses, is crucial for the performance of the next generation of fiber optic networks.<br />
The great success of Ethernet up to the present day can be traced back to the affordability, reliability, ease of use<br />
and scalability of the technology. For these reasons, most network traffic these days begins and ends its trip on<br />
Ethernet. Since the computing power of applications grows more slowly than data traffic as networks are aggregated,<br />
the development of 40 and 100 gigabit Ethernet came at just the right time to allow Ethernet to continue its<br />
triumphant march: these high speeds give the technology a perspective for the coming years in<br />
companies, in data centers and in a growing number of carrier networks.<br />
Ethernet Applications for Copper Cables with Cat.5, Cat.6 & Cat.6A<br />
Category & Class acc. ISO/IEC 11801 | Cat.5 – Class D | Cat.6 – Class E | Cat.6A – Class EA<br />
Installation cable (shielded & unshielded in all categories) | PL+ / Channel* | PL+ / Channel* | PL+ / Channel*<br />
IEEE 802.3ab – 1000BASE-T<br />
AWG 26, solid | 60 m / 70 m | 55 m / 65 m | 55 m / 65 m<br />
AWG 26, flexible | 60 m / 70 m | 55 m / 65 m | 55 m / 65 m<br />
AWG 24, solid | 90 m / 100 m | – | –<br />
AWG 23, solid | – | 90 m / 100 m | 90 m / 100 m<br />
AWG 22, solid | – | 90 m / 100 m | 90 m / 100 m<br />
IEEE 802.3an – 10GBASE-T<br />
AWG 26, solid | – | 65 m° | 55 m / 65 m<br />
AWG 26, flexible | – | 65 m° | 55 m / 65 m<br />
AWG 24, solid | – | – | –<br />
AWG 23, solid | – | 100 m° | 90 m / 100 m<br />
AWG 22, solid | – | – | 90 m / 100 m<br />
* Channel calculation based on 2 x 5 m patch cords (flexible)<br />
+ Permanent link and channel length are reduced if the patch cords (flexible cable) exceed 10 m<br />
° Cable tested up to 450 MHz<br />
10GBase-T, 10 gigabit data transmission for copper cables, was standardized in the summer of 2006. It allows for<br />
use of a 4-pair cable (transmission on all 4 pairs) and a segment length of 100 m.<br />
There is a “higher speed study group” for 40GBase-T and 100GBase-T. At this point, 40 Gbit/s over copper is<br />
possible only over 10 m via twinax cables or 1 m via backplane. A 40GBase-T standard is nevertheless<br />
expected, while one for 100GBase-T is rather doubtful.<br />
Ethernet Applications for Fiber Optic Cables with OM1 & OM2<br />
Fiber type acc. ISO/IEC 11801 | OM1 | OM2<br />
Wavelength | 850 nm / 1300 nm | 850 nm / 1300 nm<br />
Overfilled modal bandwidth (MHz*km) | 200 / 500 | 500 / 500<br />
Eff. laser launch modal bandwidth (MHz*km) | – | –<br />
Application (source type) | OM1 max. length / insertion loss | OM2 max. length / insertion loss<br />
ISO/IEC 8802-3 100BASE-FX (LED) | 2 km / 11.0 dB | 2 km / 11.0 dB<br />
IEEE 802.3z 1000BASE-SX (LED) | 275 m / 2.6 dB | 550 m / 3.56 dB<br />
IEEE 802.3z 1000BASE-LX (LED) | 550 m / 2.35 dB | 550 m / 2.35 dB<br />
IEEE 802.3ae 10GBASE-SR (VCSEL) | 33 m / 2.5 dB | 82 m / 2.3 dB<br />
IEEE 802.3ae 10GBASE-LX4 (WDM) | 300 m / 2.0 dB | 300 m / 2.0 dB<br />
IEEE 802.3ae 10GBASE-LRM (OFL) | 220 m / 1.9 dB | 220 m / 1.9 dB<br />
IEEE 802.3ae 10GBASE-SW (VCSEL) | 33 m / 2.5 dB | 82 m / 2.3 dB<br />
Ethernet Applications for Fiber Optic Cables with OM3, OM4 & OS2<br />
Fiber type acc. ISO/IEC 11801 | OM3 | OM4 | OS2<br />
Wavelength | 850 nm / 1300 nm | 850 nm / 1300 nm | 1310 nm / 1550 nm<br />
Overfilled modal bandwidth (MHz*km) | 1500 / 500 | 3500 / 500 | –<br />
Eff. laser launch modal bandwidth (MHz*km) | 2000 | 4700 | NA<br />
Application (source type) | OM3 | OM4 | OS2 (max. length / insertion loss)<br />
ISO/IEC 8802-3 100BASE-FX (LED) | 2 km / 11.0 dB | 2 km / 11.0 dB | –<br />
IEEE 802.3z 1000BASE-SX (LED) | 550 m / 3.56 dB | 550 m / 3.56 dB | –<br />
IEEE 802.3z 1000BASE-LX (LED) | 550 m / 2.35 dB | 550 m / 2.35 dB | 5 km / 4.57 dB<br />
IEEE 802.3ae 10GBASE-SR (VCSEL) | 300 m / 2.6 dB | 550 m | –<br />
IEEE 802.3ae 10GBASE-LR (laser) | – | – | 10 km / 6.0 dB<br />
IEEE 802.3ae 10GBASE-ER (laser) | – | – | 30 km / 11.0 dB (40 km / 11.0 dB)<br />
IEEE 802.3ae 10GBASE-LX4 k (WDM) | 300 m / 2.0 dB | 300 m / 2.0 dB | 10 km / 6.0 dB<br />
IEEE 802.3ae 10GBASE-LRM (OFL) | 220 m / 1.9 dB | 220 m / 1.9 dB | –<br />
IEEE 802.3ae 10GBASE-SW (VCSEL) | 300 m / 2.6 dB | – | –<br />
IEEE 802.3ae 10GBASE-LW (laser) | – | – | 10 km / 6.0 dB<br />
IEEE 802.3ae 10GBASE-EW (laser) | – | – | 30 km / 11.0 dB (40 km / 11.0 dB)<br />
IEEE 802.3ba 40GBASE-LR4 k (WDM) | – | – | 10 km / 6.7 dB a<br />
IEEE 802.3ba 40GBASE-SR4 o (VCSEL) | 100 m / 1.9 dB b c | 150 m / 1.5 dB b d | –<br />
IEEE 802.3ba 100GBASE-LR4 k (WDM) | – | – | 10 km / 6.3 dB a<br />
IEEE 802.3ba 100GBASE-ER4 k (WDM) | – | – | 30 km (15 dB) / 40 km (18 dB) a<br />
IEEE 802.3ba 100GBASE-SR10 o (VCSEL) | 100 m / 1.9 dB b c | 150 m / 1.5 dB b d | –<br />
a These channel insertion loss values include cable, connectors and splices.<br />
b The channel insertion loss is calculated using the maximum distances specified in Table 86–2 and a cabled optical fiber attenuation of 3.5 dB/km at 850 nm plus an allocation for connection and splice loss given in 86.10.2.2.1.<br />
c 1.5 dB allocated for connection and splice loss.<br />
d 1.0 dB allocated for connection and splice loss.<br />
k The digit denotes the number of wavelength-division-multiplexed lanes.<br />
o The digit denotes the number of fiber pairs.<br />
Wavelength code: S = short (850 nm) / L = long (1300/1310 nm) / E = extra-long (1550 nm)<br />
Encoding code: X = 8B/10B data coding method / R = 64B/66B data coding method / W = 64B/66B with WIS (WAN Interface Sublayer)<br />
Trend<br />
Servers are already being equipped with a 10 gigabit Ethernet interface as “state of the art”. Blade system<br />
manufacturers are planning to install 40 Gbit/s adapters, and switch manufacturers are likewise providing their<br />
equipment with 40 and 100 Gbit/s uplinks. The adjacent graphic shows the development of Ethernet technologies<br />
over time. (Source: Intel and Broadcom, 2007)<br />
3.8.3. Fibre Channel (FC)<br />
Fibre Channel was designed for the serial, continuous high-speed transmission of large volumes of data. Most<br />
storage area networks (SAN) today are based on the implementation of Fibre Channel standards. The data<br />
transmission rates that can be achieved with this technology reach 16 Gbit/s. The most common transmission<br />
media are copper cables within storage devices, and fiber optic cables for connecting storage systems to one<br />
another.<br />
As in conventional networks (LAN), in which every network interface card has a MAC address, each device in<br />
Fibre Channel has a WWNN (World Wide Node Name) as well as a WWPN (World Wide Port Name) for every port<br />
in the device. These are 64-bit values that uniquely identify each Fibre Channel device. A Fibre Channel device<br />
can have more than one port; in this case the device still has only one WWNN, but one WWPN per port. The<br />
WWNN and WWPNs are generally very similar.<br />
[Figure: Apple quad-channel 4 Gb Fibre Channel PCI Express host bus adapter – PCI Express x8, 4 connections]<br />
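The 64-bit WWNN/WWPN values are conventionally written as eight colon-separated hexadecimal bytes. A small sketch of that formatting, with an invented sample WWN:

```python
def wwn_to_str(wwn: int) -> str:
    """Render a 64-bit World Wide Name as eight colon-separated bytes."""
    return ":".join(f"{(wwn >> shift) & 0xff:02x}" for shift in range(56, -8, -8))

def str_to_wwn(text: str) -> int:
    """Parse the colon-separated notation back into a 64-bit integer."""
    return int(text.replace(":", ""), 16)

sample_wwnn = 0x200000112233AABB  # invented example value
formatted = wwn_to_str(sample_wwnn)
```

SAN zoning configurations typically identify initiators and targets by exactly these strings, which is why the WWPN plays the role the MAC address plays in a LAN.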
In the higher layers of the OSI model, Fibre Channel serves as a connection medium for other protocols,<br />
e.g. the aforementioned SCSI or even IP. This has the advantage that only minor changes are required to<br />
drivers and software. Fibre Channel achieves a user data efficiency of over 90 %, while Ethernet can fill only<br />
between 20 % and 60 % of the maximum possible transmission rate with a useful load.<br />
Topologies<br />
In general, there are two different types of Fibre Channel implementations: Switched Fabric, usually known as<br />
Fibre Channel or FC-SW for short, and Arbitrated Loop, or FC-AL. In Fibre Channel Switched Fabric, point-to-point<br />
connections are switched between terminal devices, while Fibre Channel Arbitrated Loop is a logical bus in which<br />
all terminal devices share the common data transmission rate.<br />
Page 102 of 156 © 08/2011 Reichle & De-Massari AG R&M <strong>Data</strong> <strong>Center</strong> <strong>Handbook</strong> V2.0
www.datacenter.rdm.com<br />
Arbitrated Loop (FC-AL) is also known as Low Cost Fibre Channel, since it is a popular choice for entry into the<br />
world of storage area networks. FC-AL implementations are frequently found in small clusters in which multiple<br />
physical nodes directly access a common mass storage system. FC-AL makes it possible to operate up to 127<br />
devices on one logical bus. All devices share the data transmission rate that is available there (133 Mbit/s to<br />
8 Gbit/s, depending upon the technology in use). Cabling is implemented in a star shape over a Fibre Channel<br />
hub. However, it is also possible to connect devices one after the other, since many Fibre Channel devices<br />
provide two ports. Implementing the cabling as a physical ring is not common.<br />
Fibre Channel Switched Fabric (FC-SW) is the implementation of Fibre Channel that provides the highest<br />
performance and the greatest resilience; switched fabric is generally what is meant when the term Fibre Channel<br />
is used. The Fibre Channel switch or director is located at the center of the switched fabric. All other devices are<br />
connected with one another through this device, so it is possible to switch direct point-to-point connections<br />
between any two connected devices via the Fibre Channel switch.<br />
In order to increase resilience even further, many installations are moving to a redundant dual fabric in their Fibre<br />
Channel implementations. Such a system operates two completely independent switched fabrics; each storage<br />
subsystem and each server is connected to each of the two fabrics using at least one HBA (Host Bus Adapter). In<br />
addition to the failure of specific data paths, the overall system can even cope with the failure of one entire fabric,<br />
since there is no longer a single point of failure. This capability plays an important part in the area of high<br />
availability, and therefore in the data center as well.<br />
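The dual-fabric behavior can be modeled in a toy sketch. The fabric names are illustrative; real multipathing drivers handle failover per LUN and per path, not with a single flag as here.

```python
class DualFabricPath:
    """Toy model: a server reaches its storage over two independent fabrics
    and fails over when one fabric is lost entirely."""

    def __init__(self):
        self.fabric_up = {"fabric-A": True, "fabric-B": True}

    def fail(self, fabric: str):
        self.fabric_up[fabric] = False

    def active_path(self) -> str:
        for fabric, up in self.fabric_up.items():
            if up:
                return fabric
        raise RuntimeError("no fabric available - storage unreachable")

path = DualFabricPath()
first = path.active_path()
path.fail("fabric-A")          # an entire fabric goes down
after_failover = path.active_path()
```

Even the complete loss of fabric-A leaves a working path, which is the "no single point of failure" property described above.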
Fibre Channel Applications for Fiber Optic Cables with OM1 – OM4 & OS2<br />
Fiber type acc. ISO/IEC 11801 | OM1 | OM2 | OM3 | OM4 | OS2<br />
Wavelength | 850 nm / 1300 nm | 850 nm / 1300 nm | 850 nm / 1300 nm | 850 nm / 1300 nm | 1310 nm / 1550 nm<br />
Overfilled modal bandwidth (MHz*km) | 200 / 500 | 500 / 500 | 1500 / 500 | 3500 / 500 | –<br />
Eff. laser launch modal bandwidth (MHz*km) | – | – | 2000 | 4700 | NA<br />
Application (max. length / insertion loss):<br />
1G Fibre Channel 100-MX-SN-I (1062 Mbaud) | 300 m / 3.0 dB | 500 m / 3.9 dB | 860 m / 4.6 dB | 860 m / 4.6 dB | –<br />
1G Fibre Channel 100-SM-LC-L | – | – | – | – | 10 km / 7.8 dB<br />
2G Fibre Channel 200-MX-SN-I (2125 Mbaud) | 150 m / 2.1 dB | 300 m / 2.6 dB | 500 m / 3.3 dB | 500 m / 3.3 dB | –<br />
2G Fibre Channel 200-SM-LC-L | – | – | – | – | 10 km / 7.8 dB<br />
4G Fibre Channel 400-MX-SN-I (4250 Mbaud) | 70 m / 1.8 dB | 150 m / 2.1 dB | 380 m / 2.9 dB | 400 m / 3.0 dB | –<br />
4G Fibre Channel 400-SM-LC-M | – | – | – | – | 4 km / 4.8 dB<br />
4G Fibre Channel 400-SM-LC-L | – | – | – | – | 10 km / 7.8 dB<br />
8G Fibre Channel 800-M5-SN-I | – | 50 m / 1.7 dB | 150 m / 2.0 dB | 190 m / 2.2 dB | –<br />
8G Fibre Channel 800-SM-LC-I | – | – | – | – | 1.4 km / 2.6 dB<br />
8G Fibre Channel 800-SM-LC-L | – | – | – | – | 10 km / 6.4 dB<br />
10G Fibre Channel 1200-MX-SN-I (10512 Mbaud) | 33 m / 2.4 dB | 82 m / 2.2 dB | 300 m / 2.6 dB | 300 m / 2.6 dB | –<br />
10G Fibre Channel 1200-SM-LL-L | – | – | – | – | 10 km / 6.0 dB<br />
16G Fibre Channel 1600-MX-SN (14025 Mbaud) | – | 35 m / 1.6 dB | 100 m / 1.9 dB | 125 m / 1.9 dB | –<br />
16G Fibre Channel 1600-SM-LC-L | – | – | – | – | 10 km / 6.4 dB<br />
16G Fibre Channel 1600-SM-LC-I | – | – | – | – | 2 km / 2.6 dB<br />
Source: thefoa.org<br />
In summary, Fibre Channel-based storage solutions offer the following advantages:<br />
• Very broad support is provided by hardware and software manufacturers.<br />
• At this point, the technology has reached a high level of maturity.<br />
• The system is very high-performance.<br />
• High-availability solutions can be set up (redundancy).<br />
3.8.4. Fibre Channel over Ethernet (FCoE)<br />
FCoE is a protocol for transmitting Fibre Channel frames in networks based on full-duplex Ethernet. The essential<br />
goal in introducing FCoE is to promote I/O consolidation that is based on Ethernet (IEEE 802.3), with an eye to<br />
reducing the physical complexity of network structures, especially in data centers.<br />
FCoE makes it possible to use one standard physical infrastructure both for the transmission of Fibre Channel<br />
and for conventional Ethernet. The essential advantages of this technology include scalability and the high<br />
bandwidths of Ethernet-based networks, commonly 10 Gbit/s at the present time.<br />
On the other hand, the use of Ethernet for the transport of Fibre Channel frames also brings to bear the disadvantages<br />
of the classic Ethernet protocol, such as frame loss in overload situations. Therefore some improvements<br />
to the standard are necessary in order to ensure reliable Ethernet-based transmission. These enhancements<br />
are being driven forward with <strong>Data</strong> <strong>Center</strong> Bridging (see section 3.8.7).<br />
A further definite advantage can be seen in the virtualization strategy that currently prevails among many data<br />
center providers: in practice, FCoE itself represents a form of virtualization of the physical media, and as such<br />
it can partly be applied in host systems for virtualized servers.<br />
Consolidation strategies of this nature can therefore …<br />
… reduce expenditures and costs for a physical infrastructure that consists of network elements and cables.<br />
… reduce the number and total costs of the network interface cards required in terminal devices like servers.<br />
… reduce operating costs (power supply and heat dissipation).<br />
Converged 10 GbE<br />
Converged 10 GbE is a standard for networks which merge 10 Gbit/s Ethernet and 10 Gbit/s Fibre Channel. Fibre<br />
Channel over Ethernet (FCoE) also falls under this convergence approach. The FC frames are encapsulated in<br />
Ethernet frames, which then allows the Converged Ethernet topology to be used.<br />
If switches are provided with FCoE wherever possible (because of the different packet sizes), they are then<br />
transparent for FC and iSCSI storage and for the LAN.<br />
Figure: FC encapsulation – the Fibre Channel payload and FC header are wrapped in an FCoE header and an<br />
Ethernet header. A server with a Converged Network Adapter (CNA) carries Ethernet data (1/10 Gbps) and FCoE<br />
over 10GBase-T to a Converged Network Switch, which connects to FC storage (1/2/4/8 Gbps).<br />
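The encapsulation described above can be sketched in Python. This is an illustrative simplification, not a wire-accurate implementation: the FCoE EtherType 0x8906 is the real IEEE-assigned value, but the header and trailer byte contents below are placeholder values.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet II frame with the
    FCoE EtherType. The FCoE header/trailer layout is simplified here:
    14 header bytes (version/reserved plus an SOF byte) and 4 trailer
    bytes (an EOF byte plus padding); the SOF/EOF values are examples."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + b"\x36"   # reserved bytes + SOF byte (example value)
    fcoe_trailer = b"\x41" + bytes(3)   # EOF byte (example value) + padding
    return eth_header + fcoe_header + fc_frame + fcoe_trailer
```

The resulting byte string is what would travel over the Converged Ethernet link; the receiving switch or CNA strips the Ethernet and FCoE framing and recovers the original FC frame unchanged.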
A typical Fibre Channel data frame has a payload of 2,112 bytes (data), a header, and a CRC field (checksum<br />
field). A conventional Ethernet frame has a maximum size of 1,518 bytes. In order to achieve good performance,<br />
one must avoid having to break an FCoE frame into two Ethernet frames and then put them back together. One<br />
must therefore use a jumbo frame, or at least a “baby jumbo”, which is not (yet) standardized but is offered by<br />
various manufacturers nevertheless.<br />
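The frame-size arithmetic behind this requirement can be checked directly; the FC header and FCoE overhead figures used below are simplified assumptions, not exact wire-format values.

```python
FC_MAX_PAYLOAD = 2112          # bytes of data in a full FC frame (from the text)
FC_HEADER = 24                 # FC frame header (simplified assumption)
FCOE_OVERHEAD = 14 + 4         # FCoE header + trailer (simplified assumption)
STANDARD_ETH_PAYLOAD = 1500    # payload limit of a conventional Ethernet frame

# Everything must fit into ONE Ethernet payload to avoid fragmentation
fcoe_payload = FC_MAX_PAYLOAD + FC_HEADER + FCOE_OVERHEAD
print(fcoe_payload)                          # 2154
print(fcoe_payload > STANDARD_ETH_PAYLOAD)   # True: a (baby) jumbo frame is needed
```

A "baby jumbo" MTU of roughly 2.5 KB is therefore sufficient for FCoE, well below the 9 KB of full jumbo frames.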
The 3-stage migration path from Fibre Channel to Fibre Channel over Ethernet per the Fibre Channel Industry<br />
Association (FCIA) was already shown in section 3.7.2.<br />
An essential requirement of the FCoE-supported Ethernet environment is that it be “lossless”, meaning that no<br />
packets may be dropped. Manufacturers implement “lossless” Ethernet by detecting an impending buffer overflow<br />
and then stopping the transmission, since in general Ethernet frames are lost only in case of a buffer overflow.<br />
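A toy model of this pause-based "lossless" behavior is sketched below; the buffer size and pause threshold are illustrative values, not taken from any standard.

```python
from collections import deque

class LosslessLink:
    """Toy model of 'lossless' Ethernet: the receiver signals PAUSE
    before its buffer overflows, so the sender holds frames back
    instead of the link dropping them."""

    def __init__(self, buffer_slots=8, pause_threshold=6):
        self.rx_buffer = deque()
        self.buffer_slots = buffer_slots
        self.pause_threshold = pause_threshold
        self.paused = False

    def send(self, frame) -> bool:
        if self.paused:
            return False            # sender must hold the frame; nothing is lost
        self.rx_buffer.append(frame)
        if len(self.rx_buffer) >= self.pause_threshold:
            self.paused = True      # PAUSE sent before the buffer can overflow
        return True

    def drain(self, n=1):
        for _ in range(min(n, len(self.rx_buffer))):
            self.rx_buffer.popleft()
        if len(self.rx_buffer) < self.pause_threshold:
            self.paused = False     # buffer has room again: resume transmission
```

The essential point is that no `send` ever discards a frame: back-pressure replaces loss, which is exactly the property FCoE needs from the underlying Ethernet.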
3.8.5. iSCSI & InfiniBand<br />
Another driver of the continued development of Ethernet is the connection of storage resources to the server<br />
landscape via iSCSI (Internet Small Computer System Interface).<br />
Initially, iSCSI supported speeds of 1 Gbit/s for this purpose, which was no real competition for the Fibre Channel<br />
technology generally used in SANs, whose storage networks allowed for connections with 4 and today even 16<br />
Gbit/s. With the introduction of 10 Gigabit Ethernet (10 GbE), however, this began to change in this area as well.<br />
Figure: Overview of Ethernet Development – time scale with introduction timeframe of the technologies<br />
This is because network managers are now increasingly toying with the idea of using Ethernet not just for their<br />
LANs, but also using the same physical network to integrate their SAN traffic. Such a change not only<br />
lowers the number of physical connections in the network, but also reduces general network complexity as well as<br />
associated costs. This is because a separate, costly Fibre Channel network becomes unnecessary through the<br />
use of iSCSI over Ethernet as an alternative to FCoE.<br />
iSCSI is a method which enables the use of the SCSI protocol over TCP/IP. As in normal SCSI, there is a<br />
controller (initiator) which directs communication. Storage devices (hard disks, tape drives, optical drives, etc.) are<br />
called targets.<br />
Each iSCSI node possesses a 255-byte-long name as well as an alias, and both are independent of its IP address.<br />
In this way, a storage array can be found even if it is moved to another network subsegment.<br />
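In practice these node names commonly use the "iqn." format defined in RFC 3720. The check below is a simplified sketch: the regular expression is looser than the RFC's actual rules, and the 255-byte limit follows the text above.

```python
import re

# Simplified pattern for the common "iqn." naming format:
# iqn.<year>-<month>.<reversed-domain>[:<identifier>]
# Real-world rules (RFC 3720) are stricter about length and characters.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:[^\s]+)?$")

def is_valid_iqn(name: str) -> bool:
    """Rough validity check for an iSCSI qualified name."""
    return len(name.encode()) <= 255 and bool(IQN_RE.match(name))
```

Because the name, not the IP address, identifies the node, an initiator can re-discover a target by this name after the target moves to another subsegment.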
The use of iSCSI enables access to the storage network<br />
through a virtual point-to-point connection without separate<br />
storage devices having to be installed. Existing network<br />
components (switches) can be used since no special new<br />
hardware is required for the node connections, as is the<br />
case with Fibre Channel. Access to hard disks is on a block<br />
basis and is therefore also suitable for databases, and the<br />
access is transparent as well, appearing at the application<br />
level as an access to a local hard disk.<br />
Its great advantage over a classic network is its high level of security, since iSCSI attaches great importance to<br />
flawless authentication and the security of iSCSI packets, which are transported encrypted over the network. The<br />
performance possible with this technology lies clearly below that of a locally attached SCSI system, due to the<br />
higher latencies in the network. However, with bandwidths currently starting at 1 Gbit/s or 125 MB/s, a large<br />
volume of data can be stored.<br />
InfiniBand<br />
Since InfiniBand is scalable and supports quality of service (QoS) as well as failover (redundancy function), this<br />
technology was originally used for the connection between HPC servers. Since then, servers and storage<br />
systems have also been connected via InfiniBand to speed up the data transfer process.<br />
InfiniBand is designed as a switched I/O system and, via the I/O switch, connects separate output units, processor<br />
nodes and I/O platforms like mass storage devices at a high data rate. The connection is carried out over a<br />
switching fabric. The point-to-point connection operates in full-duplex mode.<br />
<strong>Data</strong> transmission in InfiniBand is packet-oriented, and data packets have a maximum length of 4,096 bytes. In<br />
addition to its payload, every data packet has a header with addresses and error correction. Up to 64,000<br />
addressable devices are supported.<br />
The InfiniBand concept defines four network components: the Host Channel Adapter (HCA), the Target<br />
Channel Adapter (TCA), the switch and the router. Every component has one or more ports and can be connected<br />
to another component using a speed class (1x, 4x, 8x, 12x).<br />
Figure: IB1X plug for 1x InfiniBand / IB12X plug for 12x InfiniBand (Sierra Technologies)<br />
So, for example, an HCA can be connected with one<br />
or more switches, which for their part connect<br />
input/output devices via a TCA with the switch. As a<br />
result, the Host Channel Adapter can communicate<br />
with one or more Target Channel Adapters via the<br />
switch. Multi-point connections can be set up<br />
through the switch. The InfiniBand router’s method<br />
of operation is essentially comparable to that of a<br />
switch; however the router can transfer data from<br />
local subnets to other subnets.<br />
InfiniBand normally transmits over copper cables of the type also used for 10 Gigabit Ethernet. Transmission<br />
paths of up to 15 meters are possible.<br />
If longer paths need to be bridged, fiber-optic media converters can be used, which carry the InfiniBand<br />
channels over individual fiber pairs. Optical ribbon cables with MPO plugs are used for this purpose, in other<br />
words the same plug type as in 40/100 Gigabit Ethernet.<br />
In the early years, the transmission rate was 2.5 Gbit/s, which resulted in a usable transmission rate of 250 MB/s<br />
per link in both directions, in the case of 8B/10B coding. In addition to this standard data rate (1x), InfiniBand<br />
defines bundling of 4, 8 and 12 links for higher transmission rates. The resulting transmission speeds lie at 10<br />
Gbit/s, or a 1 GB/s usable data rate, and 30 Gbit/s or 3 GB/s. The range for a connection with glass fibers (single<br />
mode) can come to 10 kilometers.<br />
Newer InfiniBand developments have a tenfold higher lane data rate of 25 Gbit/s and, at the 1x, 4x, 8x and 12x<br />
levels, reach data rates between 25 Gbit/s and 300 Gbit/s.<br />
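The SDR figures quoted above follow directly from the 8B/10B line coding (8 payload bits per 10 transmitted bits); a small calculation confirms them.

```python
def infiniband_rates(lane_gbits: float, lanes: int, coding=(8, 10)):
    """Raw and usable rates for a bundled InfiniBand link.
    coding=(8, 10) models 8B/10B line coding: 8 payload bits are
    carried per 10 bits on the wire."""
    raw_gbits = lane_gbits * lanes
    usable_gbits = raw_gbits * coding[0] / coding[1]
    usable_gbytes = usable_gbits / 8          # Gbit/s -> GB/s
    return raw_gbits, usable_gbits, usable_gbytes

# The figures from the text, for SDR lanes of 2.5 Gbit/s:
print(infiniband_rates(2.5, 1))    # (2.5, 2.0, 0.25)  -> 250 MB/s per link
print(infiniband_rates(2.5, 4))    # (10.0, 8.0, 1.0)  -> 1 GB/s usable
print(infiniband_rates(2.5, 12))   # (30.0, 24.0, 3.0) -> 3 GB/s usable
```

Note that newer InfiniBand generations use more efficient line coding than 8B/10B, so this calculation applies as written only to the early 2.5 Gbit/s lanes discussed in the text.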
3.8.6. Protocols for Redundant Paths<br />
If Fibre Channel over Ethernet is integrated into the data center, or voice and<br />
video into the LAN, then chances are good that the network will have to undergo<br />
a redesign. This is because current network designs build on the provision of<br />
substantial overcapacities.<br />
The aggregation/distribution and core layer are often laid out as a tree<br />
structure in a multiple redundant manner. In this case, the redundancy is only<br />
used in case of a fault. As a result, large bandwidth resources are squandered<br />
and available redundant paths are used only insufficiently or inefficiently,<br />
if at all.<br />
However, short paths and thus lower delay times are the top priority for new<br />
real-time applications in networks. So-called "shortest path bridging" will implement<br />
far-reaching changes in this area.<br />
The most important protocols developed for network redundancy are listed<br />
below. Each of these protocols has its own specific characteristics.<br />
Spanning Tree<br />
The best-known and oldest protocol that prevents “loops” in star-shaped structured Ethernet networks and<br />
therefore allows redundant paths is the Spanning Tree Protocol (STP). The method was standardized in IEEE<br />
802.1D and came into use in the world of layer 2 switching.<br />
The STP method works as follows:<br />
• If redundant network paths exist, only one path is active. The others are passive – or actually they are<br />
“turned off” (for loop prevention).<br />
• The switches determine independently which paths are active and which are passive (negotiation).<br />
Manual configuration is not required.<br />
• If an active network path fails, a recalculation is performed for all network paths and the required<br />
redundant connection is built.<br />
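The negotiation itself is distributed, via BPDU exchange between the switches, but the resulting topology can be sketched centrally: elect the bridge with the lowest ID as root, keep only links on a tree from the root, and mark all other (redundant) links as blocked. A minimal sketch, not an implementation of the actual protocol:

```python
from collections import deque

def spanning_tree(links, bridge_ids):
    """Compute an STP-style active topology: lowest bridge ID becomes
    root, a breadth-first tree from the root stays active, and every
    remaining (redundant) link is blocked for loop prevention."""
    root = min(bridge_ids)                      # root bridge election
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    active, seen, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nbr in sorted(adj.get(node, [])):   # tie-break toward lower IDs
            if nbr not in seen:
                seen.add(nbr)
                active.add(frozenset((node, nbr)))
                queue.append(nbr)
    blocked = {frozenset(l) for l in links} - active
    return root, active, blocked
```

For a triangle of three switches, exactly one of the three links ends up blocked, which is the loop-free tree STP converges to.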
Layer 2 switching with Spanning Tree has two essential disadvantages:<br />
• Half of the network paths that are available are not used – there is therefore no load distribution.<br />
• The calculation of network paths in case of a failure can take a full 30 seconds for a network that is large<br />
and very complex. The network functions in a very restricted manner during this time.<br />
The Rapid Spanning Tree protocol (RSTP) was designed as a solution for this second problem, and has also<br />
been standardized for quite some time. Its basic idea is not to stop communication completely when a network<br />
path fails, but to continue working with the old configuration until the recalculation is completed, even though a<br />
few connections will not work during that time. Admittedly, this solution does not resolve the problem immediately.<br />
However, in case of a more “minor” failure, such as a path failing at the “edge” of the network, the network does<br />
not immediately come to a total standstill, and downtime in this case amounts to only a second or so.<br />
Since after more than 25 years the Spanning Tree process is reaching its limits, the IETF (Internet Engineering<br />
Task Force) wants to replace it with the TRILL protocol (Transparent Interconnection of Lots of Links), which is<br />
intended to supersede STP.<br />
The more elegant method is layer 3 switching (IP address) which, strictly speaking, represents routing rather<br />
than classic switching. Routing protocols such as RIP and OSPF come into use in the layer 3 environment,<br />
which is why neither switched-off network paths nor the relatively extensive recalculations apply when an<br />
active path fails.<br />
Layer 3 switching requires extremely careful planning of IP address ranges, since a separate segment must be<br />
implemented for each switch. In practice, one frequently finds a combination which uses layer 3 switching in<br />
the core area and layer 2 switching in the aggregation/access or floor area.<br />
Planning a high-performance and fail-safe network within a building, building complex or campus is already a<br />
costly procedure. It is an even more complex process to construct a network that comprises multiple locations<br />
within a city (MAN, Metropolitan Area Network).<br />
In contrast to a WAN implementation, all locations in a MAN are connected at a very high speed. The core of a<br />
MAN is a ring, based on gigabit Ethernet for example. The ring is formed through point-to-point connections<br />
between the switches at separate locations. A suitable method must ensure that an alternate path is selected in<br />
case a connection between two locations fails. In general, this is realized through layer 3 switching. The primary<br />
location, which is generally also the server location, is equipped with redundant core switches which are all<br />
connected to the ring.<br />
Link Aggregation<br />
Link Aggregation is another IEEE standard (802.3ad, 802.1ax since 2008) and designates a method for bundling<br />
multiple physical LAN interfaces into one logical channel. This technology was originally used exclusively to<br />
increase the data throughput between two Ethernet switches. However, current implementations can also connect<br />
servers and other systems via link aggregation, e.g. in the data center.<br />
Since incompatibilities and technical differences exist when aggregating Ethernet interfaces, the following terms,<br />
based on manufacturer, are used as synonyms.<br />
• Bonding, in the Linux environment<br />
• Etherchannel, for Cisco<br />
• Load balancing, for Hewlett-Packard<br />
• Trunking, for 3Com and Sun Microsystems<br />
• Teaming, for Novell Netware<br />
• Bundling, as the German term<br />
Link Aggregation is only intended for Ethernet and<br />
assumes that all ports that are to be aggregated have<br />
identical speeds and support full-duplex. The possible<br />
number of “aggregation” ports differs based on the<br />
manufacturer's implementation.<br />
The method only makes sense when the interfaces being bundled are as fast as possible. Four times Fast<br />
Ethernet at 100 Mbit/s (full-duplex) corresponds to a maximum of 800 Mbit/s, whereas four times Gigabit<br />
Ethernet gives 8 Gbit/s.<br />
Another advantage is increased failure safety. If a connection fails, the logical channel and thus a transmission<br />
path continue to exist, even if at a reduced data throughput.<br />
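Traffic is typically distributed over the bundled ports by hashing address fields, so that all frames of one conversation stay in order on one physical port while different conversations spread across the bundle; the exact hash inputs vary by manufacturer. A sketch of both the distribution and the failover behavior:

```python
import zlib

def choose_port(src_mac: str, dst_mac: str, ports: list) -> int:
    """Flow-based distribution over an aggregated link: hashing the
    address pair keeps one flow on one port (preserving frame order),
    while different flows spread across the bundle."""
    key = (src_mac + dst_mac).encode()
    return ports[zlib.crc32(key) % len(ports)]

def failover(ports: list, failed) -> list:
    """If one member link fails, the logical channel keeps operating
    over the remaining ports, at reduced total throughput."""
    return [p for p in ports if p != failed]
```

Because the mapping is deterministic, a single flow never exceeds the speed of one member link; the aggregate bandwidth is only reached across many flows.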
Shortest Path Bridging<br />
In Shortest Path Bridging (IEEE 802.1aq) the Shortest Path Tree (SPT) is calculated for each individual node,<br />
using the Link State Protocol (LSP). The LSP protocol is used to detect and show the shortest connection<br />
structures for all bridges/switches in the SPB area. Depending on the SPB concept, MAC addresses for<br />
participating stations and information on interface service affiliations are distributed to those stations that are not<br />
participating. This SPB method is called SPBM, which stands for SPB-MAC. The topology data helps the<br />
calculation computer to determine the most economical connections from each individual node to all other<br />
participating nodes. That way each switch or router always has an overall view of all paths available between the<br />
nodes.<br />
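The per-node SPT computation described above is essentially Dijkstra's algorithm run over the link-state database; each bridge takes itself as the source. A minimal sketch (the graph layout and link costs in the test are illustrative):

```python
import heapq

def shortest_path_tree(graph, source):
    """Dijkstra computation of the Shortest Path Tree that each SPB
    bridge derives from the shared link-state database.
    graph maps node -> {neighbor: link_cost}."""
    dist = {source: 0}   # cheapest known distance to each node
    prev = {}            # predecessor on the shortest path (the tree)
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                     # stale heap entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return dist, prev
```

Since every bridge computes this tree from the same topology data, all bridges agree on the forwarding paths, and redundant links carry traffic instead of being blocked as under Spanning Tree.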
In case of a change in topology, for example when a switch is added or a link is deactivated or activated, all<br />
switches must re-learn the entire topology. This process may last several minutes, depending upon the number<br />
of switches and links participating in the switchover process.<br />
Nevertheless, Shortest-Path Bridging offers a number of advantages:<br />
• It is possible to construct Ethernet networks that are truly meshed.<br />
• Network loads are distributed equally over the entire network topology.<br />
• Redundant connections are no longer deactivated automatically, but used to actively transport the data<br />
that is to be transmitted.<br />
• The network design becomes flatter and performance between network nodes increases.<br />
Dynamic negotiation must be mentioned as one disadvantage of Shortest Path Bridging. Transmission paths in<br />
this topology change very quickly, which makes traffic management and troubleshooting more difficult.<br />
The first products to support the new “TRILL” and “SPB” switching mechanisms are expected by the middle of 2011. It<br />
still remains to be seen whether present switches can be upgraded to these standards. Network design will<br />
change drastically as a result of Shortest Path Bridging. Therefore one should prepare intensively for future<br />
requirements and start to lay the groundwork for a modern LAN early on.<br />
3.8.7. <strong>Data</strong> <strong>Center</strong> Bridging<br />
The trend in the data center is moving toward convergent networks. Here, different data streams like TCP/IP,<br />
FCoE and iSCSI are transported over Ethernet. However, Ethernet is not especially suited for loss-free transmission.<br />
This is why <strong>Data</strong> <strong>Center</strong> Bridging (DCB) is being developed by the IEEE. Cisco uses the term “<strong>Data</strong><br />
<strong>Center</strong> Ethernet” for this technology. IBM uses the name “Converged Enhanced Ethernet”.<br />
Since DCB requires specialized hardware, upgrading switches that have been in use up to this point is not an<br />
easy matter. For purposes of investment security, one must therefore make sure that DCB is supported when<br />
purchasing switches.<br />
Four different standards are behind <strong>Data</strong> <strong>Center</strong> Bridging, for the purpose of drastically increasing network<br />
controllability, reliability and responsiveness:<br />
• Priority-based Flow Control (PFC, 802.1Qbb)<br />
• Enhanced Transmission Selection (ETS, 802.1Qaz)<br />
• DCB Exchange Protocol (DCBX, 802.1Qaz)<br />
• Congestion Notification (CN, 802.1Qau)<br />
All four elements are independent of one another. However, they only achieve their full effect when working<br />
together:<br />
• PFC enhances the current Ethernet stop mechanism. Whereas the current mechanism stops the entire<br />
transmission, PFC stops only one channel. Eight independent channels exist per link, and each can be<br />
stopped and restarted individually.<br />
• DCBX enables switches to exchange configurations with one another.<br />
• ETS is used to define different priority classes within a PFC channel and to process them in different ways.<br />
• CN stands for a type of traffic management that restricts the flow of traffic (rate limiting). A notification<br />
service causes the source to lower its transmission speed. (optional)<br />
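The difference between the classic link-wide PAUSE and PFC can be sketched as follows. Mapping FCoE to priority 3, as in the usage note, is a common convention rather than something mandated by the standard.

```python
class PfcPort:
    """Sketch of Priority-based Flow Control: instead of one PAUSE
    for the whole link, each of the eight 802.1p priority classes can
    be paused independently, so e.g. storage traffic is held back
    without stopping the remaining LAN traffic classes."""
    NUM_CLASSES = 8

    def __init__(self):
        self.paused = [False] * self.NUM_CLASSES

    def pause(self, priority: int):
        self.paused[priority] = True      # per-class PAUSE frame received

    def resume(self, priority: int):
        self.paused[priority] = False     # class may transmit again

    def can_send(self, priority: int) -> bool:
        return not self.paused[priority]
```

For example, pausing priority 3 (often used for FCoE) leaves priorities 0-2 and 4-7 transmitting, which is exactly the selective back-pressure a converged network needs.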
DCB allows multiple protocols with different requirements to be handled over layer 2 in a 10 gigabit Ethernet infrastructure.<br />
This method can therefore also be used in FCoE environments.<br />
Up to this point, there are only three manufacturers who are known to support DCB: Cisco, Brocade and Blade<br />
Network Technologies.<br />
3.9 Transmission Media<br />
Although the foundation for the standardization of structured communication cabling was laid in the 1990s and<br />
standards have been continuously developed to date, the data center and server room area was neglected<br />
during this entire period. The first ANSI (American National Standards Institute) standard for data centers,<br />
TIA-942, was published only in 2005, whereupon international and European standards followed.<br />
The result of this neglect becomes obvious when entering many data centers: one finds a passive IT<br />
infrastructure that was implemented without a solid, needs-oriented design and has become increasingly chaotic<br />
over the years. This chaos is further intensified, in spite of the introduction of VM solutions, by the increasing<br />
number of servers, which may have very different requirements for transmission technology or the type of LAN<br />
interface supported.<br />
The time has therefore come to rethink the technologies practiced up to this point as well as the alternative<br />
options available, in both the realization of new data centers as well as redesign of existing ones. The following<br />
section discusses these alternatives, and in the process attempts to describe methods that remain practically<br />
relevant and realistic.<br />
Definition and Specification<br />
Any sensible planning of technical measures must always be preceded by an analysis of requirements. The first<br />
step to that end is to recognize or identify the components that will be subject to these requirements. Obviously,<br />
cabling involves the components which make it possible to connect a server to the LAN.<br />
In retrospect, one can say that the structured communication cabling approach in accordance with EN 50173 (or<br />
ISO/IEC 11801 or EIA/TIA 568) was successful. This is because this standardization approach ended up introducing<br />
a cabling infrastructure that can be scaled according to needs, and permits a variety of quality grades. It<br />
was only as a result of the specifications in these standards that planners and users were guaranteed the<br />
stability and security which led to the long service life of current cabling systems.<br />
Put more simply, the secret to the success of the standards implemented in this area consists of two principles:<br />
the specification of cabling components, and the recommendation (not regulation!) for linking these components<br />
together in specific topology variants. A fundamental strength of the favored star-shaped topology results from its<br />
scalability, since through the use of different materials every layer of the hierarchy can be scaled irrespective of<br />
the other layers. New supplementary standards that are based on this proven star structure were therefore<br />
developed for data center cabling systems. They take advantage of this strength and are adapted to these<br />
environmental requirements as already described in section 3.1.<br />
The following improvements, among others, are achieved by using standards-compliant cabling systems:<br />
• Distributors are laid out and positioned better, in a way that minimizes patch and device connection cable<br />
lengths and provides optimal redundancies in the cabling system.<br />
• Work areas that are susceptible to faults are limited to the patching area in the network cabinet and in the<br />
server cabinet. If the cables used there are damaged, the cables in the raised floor do not need to be<br />
replaced as well.<br />
• The selection of the modules in the copper patch panel as well as the network cabinet and server cabinet<br />
can be made independently of the usual requirement for RJ45 compatibility. A connection cable can be<br />
used to adapt the plug to any connecting jack for active network components or network interface cards.<br />
A similar level of freedom is also provided by glass fibers. In this case, connectors of the highest<br />
technical quality are to be used in patch panels.<br />
• Inaccessible areas can be continuously documented and labeled.<br />
Now that the reasons for implementing a structured cabling system in a data center have been established, we<br />
will specify the requirements for materials, especially requirements with regard to data communication.<br />
A fierce debate currently prevails over the question of whether glass fiber or copper should be the first choice for<br />
media in a data center. In the opinion of the author, there is no solid reason for recommending one of these media<br />
types over the other in this area, nor is one strictly necessary in terms of a structured cabling system. However, it<br />
is certainly worthwhile to examine the advantages and disadvantages of both technologies in light of their use in<br />
data centers.<br />
3.9.1 Coax and Twinax Cables<br />
The importance of asymmetric cable media like coaxial and Twinax cables is increasingly restricted to areas<br />
outside of the data center network infrastructure. This is because symmetric cables, so-called twisted pair<br />
cables, are prevailing in the copper area of the structured communication cabling system.<br />
In data centers, coax cables (also called coaxial cables) are therefore found predominantly in cabling for video<br />
surveillance systems and analog KVM switches.<br />
Copper cables are often used in mass storage systems, since the connection paths between individual Fibre<br />
Channel devices in these systems are generally not very long. A total of three different copper cable types (Video<br />
Coax, Miniature Coax and Shielded Twisted Pair) are defined for FC with FC-DB9 connectors. Paths of up to 25<br />
meters between individual devices can be realized by using these cables. In terms of copper transmission media,<br />
Fibre Channel differentiates between intra-cabinet and inter-cabinet cables. Intra-cabinet cables are suitable for<br />
short housing connections of up to 13 meters. These cables are also known as passive copper cables.<br />
Inter-cabinet cables are suitable for connections of up to 30 meters between two FC ports, with transmission rates<br />
of 1.0625 Gbit/s. These cables are also known as active copper cables. The glass fiber media supported by FC are<br />
listed in the table on page 103, along with their corresponding length restrictions.<br />
IEEE 802.3ak dates from 2004 and was the first Ethernet standard for 10 Gbit/s connections over copper cable.<br />
IEEE 802.3ak defines connections in accordance with 10GBase-CX4. The standard allows for transmissions at a<br />
max. rate of 10 Gbit/s by means of Twinax cabling, over a maximum length of 15 meters. However, this speed can<br />
only be guaranteed under optimal conditions (connectors, cables, etc.). 10GBase-CX4 uses a 4-lane XAUI<br />
interface for connection, a copper cabling system which is also used for CX4-compatible InfiniBand (see 3.8.5). A<br />
maximum connection of 10 meters is supported in 40 and 100 gigabit Ethernet (40GBase-CR4 and 100GBase-<br />
CR10) using Twinax cables in 4- or 10-pair respectively (double Twinaxial IB4X- or IB10X cables respectively).<br />
The examples above are still the best-known areas of application of asymmetric cables in data centers. As<br />
already mentioned, future-oriented copper cable systems for network infrastructures consist of symmetric, or<br />
twisted, copper cables.<br />
3.9.2 Twisted Copper Cables (Twisted Pair)<br />
In designing a data cabling system in data centers that is geared for future<br />
use, there really is no alternative to using twisted pair, shielded cables<br />
and connectors.<br />
This is not a big problem for cabling systems in the German-speaking<br />
world, since this is already a de facto standard for most installations. The<br />
possibility also exists to choose your required bandwidth from the large<br />
selection of shielded cables and connectors (note: Category 6 or 6A are<br />
also available in an unshielded version).<br />
Cabling components based on copper (cables and connecting elements) can therefore be differentiated as follows:<br />
• Category 6 (specified up to a bandwidth of 250 MHz)<br />
• Category 6A / 6 A (specified up to a bandwidth of 500 MHz)<br />
• Category 7 (specified up to a bandwidth of 600 MHz)<br />
• Category 7 A (specified up to a bandwidth of 1,000 MHz)<br />
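The category/bandwidth list above can be encoded as a simple lookup; `categories_for` is an illustrative helper for selecting components, not something defined by the standards themselves.

```python
# Specified upper frequency limits per the list above (MHz)
CATEGORY_BANDWIDTH = {"6": 250, "6A": 500, "7": 600, "7A": 1000}

def categories_for(required_mhz: int):
    """Return the categories whose specified bandwidth covers a given
    required channel bandwidth, e.g. 500 MHz for 10GBase-T."""
    return sorted(c for c, mhz in CATEGORY_BANDWIDTH.items()
                  if mhz >= required_mhz)

print(categories_for(500))   # ['6A', '7', '7A'] -- Cat. 6 falls short
```

For a 10GBase-T channel (500 MHz), Category 6 drops out and Category 6A is the minimum; the higher categories provide headroom.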
As already mentioned in section 3.1.2 on page 47, ISO/IEC 24764 references ISO 11801 in the area of performance;<br />
the current version of that standard was developed as follows:<br />
• With its IEEE 802.3an standard from 2006, the IEEE not only published the transmission protocol for<br />
10 gigabit Ethernet (10GBase-T), but also defined the minimum standards for channels to be able to<br />
transmit 10 gigabit up to 100 meters over twisted copper lines.<br />
• EIA/TIA followed with its 568B.2-10 standard in 2008 which set higher minimum standards for channels,<br />
and also specified requirements for components: Category 6A was born.<br />
• In the same year, Appendix 1 to ISO/IEC 11801 was published, which formulated even stricter requirements<br />
for channels: channel class E A was defined. The definition of components remained open.<br />
• This gap was closed in 2010 with publication of ISO/IEC 11801, Appendix 2. This new standard defines<br />
components, i.e. Category 6 A cables and plug connections – note the subscript A – in the process<br />
setting standards which surpass those of EIA/TIA.<br />
Frequency | IEEE (Channel, etc.) | EIA/TIA Channel | EIA/TIA Components | ISO/IEC Channel | ISO/IEC Components<br />
1-250 MHz | 1GBase-T | Cat. 6 | Cat. 6 | Class E | Cat. 6<br />
1-500 MHz | 10GBase-T, IEEE 802.3an (2006) | Cat. 6A, EIA/TIA 568B.2-10 (2008) | Cat. 6A, EIA/TIA 568B.2-10 (2008) | Class E A, ISO/IEC 11801 Appendix 1 (2008) | Cat. 6 A, ISO/IEC 11801 Appendix 2 (2010)<br />
1-600 MHz | | | | Class F | Cat. 7<br />
1-1000 MHz | | | | Class F A | Cat. 7 A<br />
Current cabling standards for typical data center requirements<br />
Channel standards in accordance with EIA/TIA Cat. 6A show a moderate drop of 27 dB in the NEXT curve<br />
starting from 330 MHz, while a straight line is defined for the channel in accordance with ISO/IEC Class E A.<br />
A design that is in accordance with ISO/IEC therefore provides for the best possible transmission performance<br />
with maximum availability in twisted pair copper cabling based on RJ45 technology. At 500 MHz the NEXT<br />
performance required for Class E A must be 1.8 dB better than for a Cat. 6A channel. In practice, this higher<br />
demand leads to higher network reliability, and in turn to fewer transmission errors.<br />
Figure: Comparing IEEE 802.3an, TIA and ISO/IEC: NEXT limit values for channels<br />
This also lays the basis for a longer service and overall life of the cabling infrastructure.<br />
Cable Selection<br />
It is generally known that maximum bandwidth usually also allows a maximum data rate. If cable prices are equal<br />
in terms of length, preference should be given to the cable type of the highest quality (i.e. Category 7 A), unless<br />
this results in clear drawbacks such as a large outer diameter, which makes the cable more difficult to install and<br />
leads to larger cable routing systems, both of which can increase costs. However, experience with many tenders<br />
shows that providers very rarely demand an additional price for laying cables of a higher quality. In addition, the<br />
process for planning cable routing systems generally does not differ for Category 7( A) and Category 6(A/ A)<br />
media. There is therefore no striking disadvantage to installing top-quality cables in the data center.<br />
In terms of shielding, one must take into account that the use of ever higher modulation levels certainly reduces<br />
the required bandwidth, but it also makes the new protocols increasingly susceptible to external<br />
electromagnetic influences. The symbol distance of 10 gigabit Ethernet is approximately 100 times smaller than<br />
that of 1 gigabit Ethernet. Nevertheless, unshielded cabling systems can be used for 10GBase-T if appropriate<br />
basic and environmental conditions are satisfied. However, UTP cabling systems require additional<br />
protective measures to support 10GBase-T, such as:<br />
• Careful separation of data cables and power supply cables or other potential sources of interference<br />
(minimum distance of 30 cm between data and power cables)<br />
• Use of a metallic cable routing system for data cables<br />
• Prevention of the use of wireless communication devices in the vicinity of the cabling system<br />
• Prevention of electrostatic discharges through protective measures common in electronics manufacturing,<br />
like grounded work stations, ESD wrist straps, anti-static floors, etc.<br />
The effects and costs of additional protective measures as well as operational constraints must also be taken into<br />
consideration when deciding between shielded and unshielded cabling for 10GBase-T.<br />
The EMC (electromagnetic compatibility) behavior of shielded and unshielded cabling systems for 10GBase-T is<br />
described in detail in section 3.10.6, by means of an independent examination.<br />
R&M <strong>Data</strong> <strong>Center</strong> <strong>Handbook</strong> V2.0 © 08/2011 Reichle & De-Massari AG Page 111 of 156
Since old cable terms related to shielding were not standard, were inconsistent, and often led to confusion, a new<br />
naming system in the form XX/YZZ was introduced with ISO/IEC 11801 (2002).<br />
• XX stands for overall shielding<br />
o U = no shielding (unshielded)<br />
o F = foil shield<br />
o S = braided shield<br />
o SF = braid and foil shield<br />
• Y stands for pair shielding<br />
o U = no shielding (unshielded)<br />
o F = foil shield<br />
o S = braided shield<br />
• ZZ always stands for TP = twisted pair<br />
Twisted pair cables in the following shield variants are available on the market:<br />
Shielding               U/UTP   F/UTP   U/FTP   S/FTP   F/FTP   SF/FTP<br />
Overall shield: foil      --     yes      --      --     yes     yes<br />
Overall shield: braid     --      --      --     yes      --     yes<br />
Pair shield: foil         --      --     yes     yes     yes     yes<br />
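The XX/YZZ designations can be decoded mechanically from the letter codes described above; a minimal sketch (the function and names are illustrative, not part of the standard):<br />

```python
# Decode ISO/IEC 11801 (2002) cable shielding designations of the form XX/YZZ.
SHIELD_CODES = {"U": "unshielded", "F": "foil", "S": "braid", "SF": "braid and foil"}

def decode_shielding(designation: str) -> dict:
    """Split e.g. 'S/FTP' into overall shield, pair shield and cable type."""
    overall, rest = designation.split("/")
    pair, cable_type = rest[:-2], rest[-2:]   # ZZ is always 'TP' (twisted pair)
    return {
        "overall shield": SHIELD_CODES[overall],
        "pair shield": SHIELD_CODES[pair],
        "cable type": cable_type,
    }

print(decode_shielding("S/FTP"))
# {'overall shield': 'braid', 'pair shield': 'foil', 'cable type': 'TP'}
```

Applied to the table above, the decoder reproduces each column, e.g. SF/FTP yields an overall braid-and-foil shield plus a foil shield around each pair.<br />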
The introduction of thin, lightweight AWG26 cables (with a wire diameter of 0.405 mm, as compared to 0.644 mm<br />
for AWG22) as well as enormous advances in shielding and connection technology is making massive savings in<br />
cabling costs possible, and at the same time increasing the performance and efficiency of passive infrastructures.<br />
Up to 30 percent savings in cabling volume and weight is possible using these cables. However, some details<br />
come into play with regard to planning, product selection and installation for achieving sufficient attenuation<br />
reserves in the channel and permanent link, and in turn full operational reliability of the cabling system.<br />
AWG stands for American Wire Gauge and is a wire diameter coding system used to characterize stranded<br />
electrical lines as well as solid wires, and is primarily used in the field of electrical engineering. The following core<br />
diameters are used for communication cabling:<br />
• AWG22 / Ø 0.644 mm • AWG23 / Ø 0.573 mm • AWG24 / Ø 0.511 mm • AWG26 / Ø 0.405 mm<br />
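The diameters listed above follow from the standard AWG formula d = 0.127 mm × 92^((36−n)/39); a quick check:<br />

```python
def awg_diameter_mm(gauge: int) -> float:
    """Wire diameter in mm computed from the American Wire Gauge number."""
    return 0.127 * 92 ** ((36 - gauge) / 39)

for gauge in (22, 23, 24, 26):
    print(f"AWG{gauge}: {awg_diameter_mm(gauge):.3f} mm")
# AWG22: 0.644 mm, AWG23: 0.573 mm, AWG24: 0.511 mm, AWG26: 0.405 mm
```

The computed values match the four core diameters given above to three decimal places.<br />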
In addition to actually transmitting data, a structured cabling system with twisted copper cables allows frequently<br />
used terminal devices to be supplied with power from a remote location. Such a system based on the<br />
IEEE 802.3af and IEEE 802.3at standards, better known by the names “Power over Ethernet” (PoE) and “Power<br />
over Ethernet Plus” (PoEplus), can use data cables to supply terminal devices such as wireless access points,<br />
VoIP telephones and IP cameras with the electrical energy they require. You can find detailed information on this<br />
process in section 3.10.3.<br />
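The nominal power budgets of the two standards can be summarized in a small lookup; a minimal sketch using the commonly cited figures from IEEE 802.3af/at (source-side output vs. power guaranteed at the terminal device):<br />

```python
# Nominal power budgets of PoE (IEEE 802.3af) and PoE Plus (IEEE 802.3at), in watts.
POE_BUDGETS_W = {
    "IEEE 802.3af (PoE)":      {"pse_output": 15.4, "pd_available": 12.95},
    "IEEE 802.3at (PoE Plus)": {"pse_output": 30.0, "pd_available": 25.5},
}

def cable_loss_w(standard: str) -> float:
    """Power budgeted for dissipation in the cabling between source and device."""
    budget = POE_BUDGETS_W[standard]
    return round(budget["pse_output"] - budget["pd_available"], 2)

print(cable_loss_w("IEEE 802.3af (PoE)"))       # 2.45 W lost in the cabling
print(cable_loss_w("IEEE 802.3at (PoE Plus)"))  # 4.5 W lost in the cabling
```

The difference between the two columns is the allowance for resistive losses in the twisted pair link, which is one reason conductor cross-section (AWG) matters for PoE planning.<br />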
Pre-Assembled Systems / Plug-and-Play Solutions<br />
There are manufacturing concepts that will provide decided advantages when it is expected that a data center will<br />
undergo dynamic changes. All installations described up to this point (copper and glass fiber cabling) require that<br />
an installation cable be laid and be provided with plugs or jacks at both ends. The termination process requires a<br />
high level of craftsmanship. Quick, spontaneous changes to the positions of connections in a data center are only<br />
possible under certain conditions, and are therefore expensive. Most cabling system manufacturers provide<br />
alternatives in the form of systems made up of the following individual components:<br />
• Trunk cables consisting of multiple twisted-pair cables or multi-fiber cables, terminated at both ends with<br />
normal plug connectors or jacks, or with a special manufacturer-specific connector.<br />
Manufacturers also provide a measurement report for all connections, especially for glass fiber systems.<br />
• Modular terminal blocks which can be mounted in the cabinet using 19” technology or even in special<br />
double floor systems. Trunk cables can be connected to this block at the input side using the special<br />
connector, and then completely normal RJ45 jacks or even glass fiber couplers are available on the output<br />
side.<br />
At this point we should again mention that the availability of data center applications can be increased by using<br />
cabling systems pre-assembled at the factory. This is because these systems minimize the time spent by<br />
installation personnel (third-party personnel) in the security area of the data center during both initial installation of<br />
the cabling system as well as during any expansions or changes, which translates into an additional plus in terms<br />
of operational reliability.<br />
In addition, factory pre-assembled cabling systems save time during installation, which also saves costs. If<br />
capacity must be expanded because of an increase in IT devices, these cabling systems, and in turn the actual IT<br />
application, can be linked to one another and put into operation easily and as quickly as possible by internal<br />
operational personnel.<br />
These ready-to-operate systems for so-called “plug-and-play installations” are often freely configurable as well,<br />
and offer reproducible, consistently high quality and therefore excellent transmission properties.<br />
Pre-assembled shielded cable fitted with 6 A modules from R&M<br />
In addition, no acceptance test is required after these cabling systems are installed, as long as the cables and<br />
connection elements were not damaged during the installation. Furthermore, these solutions provide high investment<br />
protection, since they can be uninstalled and reused.<br />
Trunk cable fitted with MPO/MTP ® connectors from R&M<br />
The increase in density of glass fiber cabling in data centers has led<br />
to parallel optic connection technology, and in turn to innovative<br />
multi-fiber connectors that are as compact as an RJ45 plug<br />
connection. Multi Fiber Push-On (MPO) technology has proven itself<br />
to be a practicable solution in this area. This is because ports for 40/100 gigabit Ethernet now require four or<br />
even ten times the number of plug connections. This high number of connectors can no longer be managed with<br />
conventional single connectors.<br />
This is why the multi-fiber MPO connector, which can contact 12 or<br />
24 fibers in a minimal amount of space, was included in the IEEE<br />
802.3ba standard for 40GBase-SR4 and 100 GBase-SR10. It is<br />
therefore clear that an MPO connector with 12, 24 or even up to 72<br />
fibers can no longer be assembled on site. The use of the MPO<br />
connectors therefore always also implies the use of trunk cables<br />
which are pre-assembled and delivered in the desired length.<br />
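The fiber counts behind these connector sizes follow directly from the lane structure of IEEE 802.3ba: each 10 Gbit/s lane needs one fiber per direction. A quick check:<br />

```python
# Fiber counts for the parallel optic interfaces of IEEE 802.3ba.
def fibers_required(lanes: int) -> int:
    """Each lane needs one fiber per direction (transmit + receive)."""
    return 2 * lanes

print(fibers_required(4))    # 40GBase-SR4:   8 fibers -> fits a 12-fiber MPO
print(fibers_required(10))   # 100GBase-SR10: 20 fibers -> fits a 24-fiber MPO
```

This is why the 12- and 24-fiber MPO variants map so naturally onto 40GBase-SR4 and 100GBase-SR10.<br />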
Please find more detailed information on parallel optic connection technology using MPO in section 3.10.1.<br />
3.9.3 Plug Connectors for Twisted Copper Cables<br />
The RJ45 connection system has been developed over decades, from a simple telephone plug into a high-performance<br />
data connector. The RJ45 plug and jack interface was defined by ISO/IEC JTC1/SC25/WG3 in<br />
the international ISO/IEC 11801 standard, and is considered a standard component of universal cabling systems. It is<br />
used as an eight-pole miniature plug system in connection<br />
with shielded and unshielded twisted copper cables.<br />
The RJ45 is the most widely distributed of all data<br />
connectors, and is used in both the area for connecting<br />
terminal devices like PCs, printers, switches, routers,<br />
etc. – in the outlets and on connection/patch cables – as<br />
well as in distribution frames for laying out and distributing<br />
horizontal cabling.<br />
The transmission properties of the RJ45 connection system are essentially determined by its small dimensions.<br />
Parameters like frequency range, attenuation, crosstalk and ACR are significant in determining transmission<br />
behavior. Crosstalk is especially critical, since pairs which are guided separately in the cable must be laid over<br />
one another in the connection system (1-2, 3-6, 4-5, 7-8).<br />
Connector characteristics must be adapted to those of the cable within the structured cabling system. In order to<br />
compare and adjust these characteristics, connectors, like data cables, connection cables, patch cords and other<br />
cabling components, are defined by the categories already mentioned.<br />
The close spacing of the contacts, as well as the wire routing mentioned above, has proven to be problematic for<br />
high-frequency behavior. Only connection modules that have been optimized with respect to contact geometry satisfy<br />
standards up to Categories 6A and 6 A.<br />
As in channel technology (see page 111),<br />
higher performance can be achieved with a<br />
Cat. 6 A connector designed in accordance<br />
with ISO specifications than with a Cat. 6A<br />
plug connector in accordance with EIA/TIA<br />
specifications.<br />
A 40 dB slope in NEXT should be provided<br />
for Cat. 6A starting from 250 MHz, and a 30<br />
dB slope for Cat. 6 A, in accordance with the<br />
component standard. In the case of 500<br />
MHz, this means that a Cat. 6 A module<br />
must achieve NEXT performance that is 3<br />
dB better than a Cat. 6A module (see the<br />
graphic to the right).<br />
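Reading the two slopes as dB per decade (an assumption about the units in the figure), the 3 dB difference at 500 MHz follows directly from the 250 MHz kink; a quick check:<br />

```python
import math

# NEXT limit slopes above the 250 MHz kink, per the component standards as
# described above: 40 dB/decade for TIA Cat. 6A, 30 dB/decade for ISO Cat. 6_A.
def next_advantage_db(f_mhz: float, kink_mhz: float = 250.0,
                      slope_tia: float = 40.0, slope_iso: float = 30.0) -> float:
    """Extra NEXT margin of the shallower ISO curve at frequency f_mhz."""
    return (slope_tia - slope_iso) * math.log10(f_mhz / kink_mhz)

print(round(next_advantage_db(500), 1))  # -> 3.0 dB at 500 MHz
```

From 250 to 500 MHz is log10(2) ≈ 0.30 of a decade, so the 10 dB/decade difference in slope yields almost exactly the 3 dB figure cited above.<br />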
This is significant. To achieve this performance, new modules must be developed from the ground up. The<br />
required NEXT performance cannot be reached through a change to the existing design alone – as can often be<br />
seen in Cat. 6A modules available on the market. Above all, more compensation elements are required to balance<br />
out the additional coupling effects like cross-modal couplings. Greater effort is required to separate pairs from one<br />
another. The termination process must be performed in a very precise manner and guaranteed to be without error,<br />
to ensure consistent signal transmission.<br />
R&M has succeeded in doing just that with its Cat. 6 A module, even with enormous safety reserves over and<br />
above the strict international standard. This innovative solution can be shown using the following product example<br />
for Cat. 6 A.<br />
The requirements for the transmission performance for Cat. 7/7 A can only be achieved by assigning the pair pins<br />
(i.e. 1-2 and 7-8) to the outer corners of the jack, which restricts flexibility considerably. For this reason, the RJ45<br />
connector was not used to connect to Cat. 7 and 7 A cables. The GG45 connector, ARJ45 connector and TERA<br />
connector were developed and standardized for these categories. These connection systems have a different<br />
contact geometry, by means of which the center, or critical, pairs can be separated from one another.<br />
Selecting Connection Technology<br />
NEXT limit values for plug connections (connecting hardware): ISO/IEC Cat. 6 A vs. TIA Cat. 6A<br />
As compared to the “cable question”, the “connector question” is more difficult to solve if one consistently chooses<br />
to use cabling of Class F or Class F A, which, however, is not required for 10GBase-T (see table on page 111). This<br />
is because the only choice that remains when opting for Category 7 or 7 A connection systems is between the<br />
TERA TM connection system and the GG45 and ARJ45 connector types.<br />
TERA TM connection system, GG45 connection system, ARJ45 connection system,<br />
Siemon Company Nexans Stewart Connector<br />
The advantage of the GG45 jack lies in its compatibility with existing RJ45 connection cables, since all previous<br />
cables can be re-used until a link is first used for Class F( A). However, tenders in which Category 6A/ A<br />
technology and GG45-based technology were compared show that the use of GG45 leads to about triple the price<br />
in overall connection technology (mind you: a factor of 3 for materials and assembly!), and this in turn is what turns<br />
off many clients.<br />
The additional cost of the TERA connection system is not so high, but on the other hand this system forces an<br />
immediate use of new connection and patching cables (TERA on RJ45). As in the case of GG45, many clients<br />
accept this restriction only reluctantly.<br />
ARJ45 (augmented RJ) likewise satisfies Category 7 A standards and thus those of link class F A. Furthermore,<br />
ARJ45 connection systems can be used in addition to 10 gigabit Ethernet in other high-speed networks like<br />
InfiniBand or Fibre Channel (FC), or to replace Serial Attached SCSI (SAS) or PCI-Express (PCIe). ARJ45 is<br />
interoperable with the GG45 plug connector, which has the same areas of application. Even its form factor is<br />
identical to that of the RJ45 connector.<br />
However, the question remains whether one must use a Category 7 A cabling system at all, or whether a<br />
more cost-effective Category 6A/ A system, which is fully usable for 10 Gbit/s, would be sufficient. Because of its<br />
more cost-effective connection technology, this variant represents a sensible solution.<br />
Category 6 A from R&M<br />
For all processes from product development to production, R&M’s primary goal is to guarantee that 100 percent of<br />
the modules it provides will always deliver the expected performance. This means that every single module must<br />
always achieve the desired performance everywhere, regardless of who is performing the termination.<br />
A pilot project carried out last year, using over 2,000 Class E A permanent links of different lengths, showed that<br />
all installers could achieve exceptionally good NEXT performance using this module (4 to 11 dB of reserve).<br />
The Cat. 6 A module from R&M was designed from the bottom up to be<br />
easy to operate, yet provide maximum performance during operation – in<br />
other words, where it matters.<br />
Its innovative wiring technology with automatic cutting process guarantees<br />
an absolutely consistent, precise termination of the wires, independent<br />
of the installer.<br />
Cat. 6 A module with X Separator from R&M<br />
The use of all four module sides and its unique pyramid shape maximizes<br />
the distance between the pairs in the module. The X Separator comes<br />
into use here and isolates pairs from one another to minimize the NEXT<br />
effect of the cable connection on performance.<br />
All these product features ensure that Category 6 A modules from R&M always achieve outstanding, consistent<br />
performance, with enormous safety reserves for signal transmission.<br />
Conclusion<br />
What should a data center planner or operator select after deciding to go with copper for cabling components?<br />
From the current viewpoint, for financial and technical reasons, a shielded Category 7 A cable combined with a<br />
shielded Category 6 A connection system is definitely the best choice.<br />
Security Solutions for Cabling Systems<br />
A quality cabling system can have a positive effect on data center security, and appropriate planning will allow it to<br />
be designed in such a way that it can be adapted to increasing requirements at any time. Clarity,<br />
simplicity and usability play a major role in this process. And in the end, the cost-benefit ratio should also be taken<br />
into consideration in product selection. Often a simple, yet consistent, mechanical solution is more effective than<br />
an expensive active component or complex software solution.<br />
To use an example, R&M consistently extends its principle of modularity into the area of security. A central<br />
element here is color coding. Colored patch cords are generally used in data centers. The purpose of the colors is<br />
to indicate different types of IT services. This method is effective but inefficient, since it makes it necessary for the<br />
data center to stock up on different types of cables. In addition, it is not always possible to ensure that the correct<br />
colors are used – especially after large-scale system failures occur, or in the case of changes that occur over a<br />
long period of time. Incorrectly patched color-coded cables have a detrimental effect on IT services. This may<br />
result in chain reactions up to and including total system failure.<br />
By contrast, R&M’s security solution is based on color marking components for the ends of patch cords. They are<br />
easy to put on and replace. The cable itself does not need to be pulled out or removed. A color coding system that<br />
is equally easy to apply can be used for patch panels. This allows connections to be assigned unambiguously. In<br />
addition, this system reduces costs for stocking up on patch cords.<br />
Furthermore, the autonomous security system R&M offers as an extension to its cabling systems is characterized<br />
by mechanical coding and locking solutions. This system ensures that cables are not connected to incorrect ports,<br />
and that a cable – whether it be a copper or glass fiber cable – always stays in its place. The connection can only<br />
be opened with a special key.<br />
R&M’s visual and mechanical coding system and locking solution together provide an enormous improvement to<br />
passive data center security.<br />
Security solutions from<br />
R&M, ranging from<br />
insertable color coding<br />
units (left) to locks on<br />
patch cords (right)<br />
3.9.4 Glass Fiber Cables (Fiber optic)<br />
Glass fiber is a fascinating medium. It promises bandwidth that is virtually without limits. However, it is also a delicate,<br />
brittle medium. It takes a skilled person to handle glass fiber cabling. Nevertheless, by using proper planning<br />
and well-thought-out product selection, data centers can get a successful and problem-free start into their glass<br />
fiber future. The continuously growing requirements of data centers leave no other choice. Nowadays, data<br />
centers must include glass fibers in their planning to enable them to cope with migrating the data center to 40/100<br />
gigabit Ethernet, growing data volume and many other requirements beyond that.<br />
Scalability, performance, access time, security, flexibility, and<br />
so on can be optimized through the use of glass fibers. Advances<br />
in glass fiber technology, practical knowledge from<br />
manufacturers and revolutionary standards help users to make<br />
the right decisions during the planning process and to invest in<br />
a future-proof installation. An overview appears below of the<br />
fundamental topics, terminologies and issues that play a role in<br />
making plans for a glass fiber cabling system in a data center.<br />
We then present an orientation for decision makers, and information<br />
on some progressive glass fiber solutions that are<br />
currently available on the market.<br />
3.9.5 Multimode, OM3/4<br />
Optical fibers can be divided into glass media (Glass Optical Fiber, GOF) and plastic media (Plastic Optical Fiber,<br />
POF). Due to its clearly lower performance capabilities, POF is not suitable for data center applications.<br />
One can differentiate between three types of glass fibers:<br />
• Multimode glass fibers with step index profile<br />
• Multimode glass fibers with gradient index profile<br />
• Single mode glass fibers<br />
Multimode and single mode specify whether the glass fiber is designed for multiple modes, or only for one mode.<br />
By means of a “smooth” transition between the high and low refractive index of the medium, the modes in<br />
multimode glass fibers with a gradient index profile, in contrast to those with a step index profile, no longer run<br />
straight, but in a wave shape around the fiber axis, which evens out modal delays (minimization of modal<br />
dispersion). Because of its good cost-benefit ratio (production expense to bandwidth length product), this glass<br />
fiber type has virtually established itself as the standard fiber for high-speed connections over short to medium<br />
distances, e.g. in LANs and in data centers.<br />
Structure of a glass fiber: core glass (10 µm single mode, 50 or 62.5 µm multimode), cladding glass (125 µm),<br />
primary coating (0.2 ... 0.3 mm), secondary coating (0.6 ... 0.9 mm)<br />
Apart from attenuation in dB/km, the bandwidth length product (BLP) is the most important performance indicator<br />
for glass fibers. The BLP is specified in MHz*km: a BLP of 1000 MHz*km means that the usable bandwidth comes<br />
to 1000 MHz over 1000 m or 2000 MHz over 500 m.<br />
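The BLP relationship above can be written as a one-line calculation; a minimal sketch:<br />

```python
def usable_bandwidth_mhz(blp_mhz_km: float, length_m: float) -> float:
    """Usable bandwidth of a multimode fiber from its bandwidth length product."""
    return blp_mhz_km / (length_m / 1000.0)

print(usable_bandwidth_mhz(1000, 1000))  # 1000.0 MHz over 1000 m
print(usable_bandwidth_mhz(1000, 500))   # 2000.0 MHz over 500 m
```

Halving the link length doubles the usable bandwidth, which is exactly the trade-off the MHz*km unit expresses.<br />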
OM3 and OM4 are laser-optimized 50/125 µm multimode glass fibers. While OM1 and OM2 fibers are used with<br />
LEDs as signal sources, lasers are used for Category OM3 and OM4 fibers. In general, these are type VCSEL<br />
lasers (Vertical-Cavity Surface-Emitting Laser), that are considerably more cost-effective than Fabry-Perot lasers,<br />
for example. Lasers have the advantage that, unlike LEDs, they are not limited to a maximum frequency of<br />
622 Mbit/s and can therefore transmit higher data rates.<br />
The transmission capacities of multimode fibers of (almost) all types allow a data center to have a centrally<br />
constructed data cabling system. In contrast, a copper solution starting out from a single area distributor is unlikely<br />
to work in room sizes greater than about 1,600 m². But just as a purely central cabling with glass fiber to the desk<br />
makes little sense in office cabling, there is also no specified requirement for a central cabling system in the data<br />
center. Instead, the standard permits the creation of different areas with different media and correspondingly<br />
different length restrictions (see tables on pages 100 and 101). So if the data center covers a greater area, a main<br />
distributor must be provided which connects the linked area distributors over either copper or glass fiber. The<br />
cabling coming from the area distributors could then again be realized in twisted pairs, similar to the floor<br />
distributors.<br />
In contrast to glass fibers, which, when using media of at least OM3 quality, are without a doubt well-suited for<br />
data rates higher than 10 Gbit/s, a similar suitability has not yet been identified for twisted pair cables. If one<br />
follows ISO/IEC 24764, EN 50173-5 or the new TIA-942-A, new installations of OM1 and OM2 are no longer allowed.<br />
This raises the question: what should one do with cabling systems where OM1 or OM2 was already used? Do<br />
these need to be removed?<br />
Of course, the problem of mixing glass fiber types does not come up only in data centers. In any glass fiber environment<br />
that has “evolved”, the question must be asked if an incorrect “interconnection” of old and new fibers<br />
could lead to errors. According to cable manufacturers, opinions among experts are split on this point. The<br />
predominant opinion is that with short cable paths, where an incorrect use of connection cables is likely, the<br />
differences in modal dispersion should not have a negative effect. If this opinion proves true, older<br />
fibers may certainly be kept in the data center as well, especially when incorrect connections can be avoided<br />
through the use of different connectors, codings or color markings on the adapters.<br />
Conclusion<br />
Anyone examining the future of data centers must inevitably<br />
devote special attention to OM4 glass fibers. A number of<br />
reasons speak in favor of consistently using these fibers<br />
already today. They offer additional headroom<br />
for insertion loss over the entire channel (channel<br />
insertion loss) and therefore for more plug connections.<br />
Similarly, the use of OM4 results in higher reliability of<br />
the overall network, a factor which will play a decisive<br />
role in upcoming applications of 40 and 100 gigabit<br />
Ethernet.<br />
Finally, OM4 with its 150-meter range (as opposed to 100<br />
meters with OM3) provides a good length reserve, which<br />
is also covered by a higher attenuation reserve.<br />
3.9.6 Single mode, OS1/2<br />
In contrast to multimode glass fibers, only one light path exists in single mode glass fibers due to their extremely<br />
thin core, typically 9 µm in diameter. Since multi-path propagation with signal delay differences between different<br />
modes is therefore impossible, extremely high transmission rates can be achieved over great distances. Also<br />
known as mono-mode fibers, single mode fibers can be used in the wavelength range from 1,280 to 1,650 nm, in<br />
the second or third optical window. This results in a theoretical bandwidth of 53 THz.<br />
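The 53 THz figure follows from converting the two wavelengths to optical frequencies; a quick check:<br />

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def band_width_thz(lambda_short_nm: float, lambda_long_nm: float) -> float:
    """Frequency span between two wavelengths, in THz."""
    return (C / (lambda_short_nm * 1e-9) - C / (lambda_long_nm * 1e-9)) / 1e12

print(round(band_width_thz(1280, 1650)))  # -> 53 THz
```

The span from 1,280 nm (about 234 THz) down to 1,650 nm (about 182 THz) is roughly 52.5 THz, which rounds to the 53 THz quoted above.<br />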
On the other hand, single-mode glass fibers place maximum demands on light injection and connection technology.<br />
They therefore come into use primarily in high-performance areas like MAN and WAN backbones.<br />
In order to do justice to the higher demands of optical networks with WDM and DWDM technology (Dense Wavelength<br />
Division Multiplexing) there exist dispersion-optimized monomode fibers: the Non Dispersion Shifted Fiber<br />
(NDSF), Dispersion Shifted Fiber (DSF) and Non Zero Dispersion Shifted Fiber (NZDSF), which were standardized<br />
by the ITU in its G.650 ff. recommendations.<br />
The following table shows the specifications of all standardized multimode and single-mode glass fiber types:<br />
Fiber Types and Categories<br />
                                OM1           OM2          OM3          OM4          OS1           OS2<br />
Modes                           multimode     multimode    multimode    multimode    single mode   single mode<br />
IEC 60793-2 category            10-A1b        10-A1a       10-A1a       10-A1a       50-B1.1       50-B1.3<br />
ITU-T type                      G.651         G.651        G.651        G.651        G.652         G.652<br />
Core/cladding (typical)         62.5/125 µm   50/125 µm    50/125 µm    50/125 µm    9(10)/125 µm  9/125 µm<br />
Numerical aperture              0.275         0.2          0.2          0.2          --            --<br />
Attenuation at 850 nm (typ.)    3.5 dB/km     3.5 dB/km    3.5 dB/km    3.5 dB/km    --            --<br />
Attenuation at 1,300 nm (typ.)  1.5 dB/km     1.5 dB/km    1.5 dB/km    1.5 dB/km    1.0 dB/km     0.4 dB/km<br />
BLP at 850 nm                   200 MHz*km    500 MHz*km   1.5 GHz*km   3.5 GHz*km   --            --<br />
BLP at 1,300 nm                 500 MHz*km    500 MHz*km   500 MHz*km   500 MHz*km   --            --<br />
Effective modal bandwidth       --            --           2 GHz*km     4.7 GHz*km   --            --<br />
3.9.7 Plug Connectors for Glass Fiber Cables<br />
So we come to another point which must be defined: which is the best<br />
fiber optic connector<br />
The standard in this area leaves us very much on our own, since no<br />
clear focus exists on just one or two systems for glass fibers, as in the<br />
area of copper-based technologies. If you look back over the past 20<br />
years, you will find that almost none of the modern plug connectors<br />
for active components possesses the same timelessness as the RJ45<br />
does in the world of copper.<br />
Maximum transmission rates and universal network availability can be achieved through the use of high-quality<br />
plug connectors in all areas, from the WAN over the metropolitan area and campus network up to the backbone<br />
and subscriber terminal. A basic knowledge of the quality grades for fiber optic connectors is indispensable<br />
for planners and installers. The following section provides information on current standards and discusses their<br />
relevance for product selection.<br />
Connector Quality and Attenuation<br />
Naturally, as in any other detachable connection in electrical and<br />
communication technology, losses in signal transmission occur with<br />
fiber optic connectors as well.<br />
Therefore, the primary goal in the development, manufacture and<br />
application of high-quality fiber optic connectors is to eliminate the<br />
cause of loss at fiber junctions wherever possible. This can only be<br />
achieved by means of specialized knowledge and years of experience<br />
in the areas of optical signal transmission and high-precision production<br />
processes.<br />
The extremely small diameter of glass fiber cores requires a maximum degree of mechanical and optical precision.<br />
Working with tolerances of 0.5 to 0.1 µm (much smaller than a grain of dust), manufacturers are moving toward<br />
the limits of fine mechanics and advancing their processes into the field of microsystems technology. No compromises<br />
can be made in this area.<br />
In contrast to their electromagnetic counterparts, fiber optic connectors make no distinction between plugs and<br />
jacks. Fiber optic connectors include a ferrule that accepts the end of the fiber and positions it precisely. They are<br />
connected together by means of an adapter with an alignment sleeve. A complete connection consists of a<br />
connector/adapter/connector combination. Both ferrules with their fiber ends must meet inside the connection so<br />
precisely that as little light energy as possible is lost or scattered back (return loss). A crucial factor in this<br />
process is the geometric alignment and processing of the fibers in the connector.<br />
Core diameters of 9 µm for single mode and 50 µm or 62.5 µm for multimode fibers, and ferrules with diameters of<br />
2.5 mm or 1.25 mm make it impossible to inspect the connector visually, without the use of aids. Of course, one<br />
can determine on site immediately whether a connector snaps and locks into place correctly. Users must be able<br />
to rely on manufacturer specifications for all other properties – the “inner qualities” – like attenuation, return loss,<br />
or mechanical strength.<br />
Analogous to copper cabling, an assignment into classes also exists for fiber optic channel links – not to be<br />
confused with categories (see table below) that describe fiber type and material. Per DIN EN 50173, a channel<br />
link contains permanently installed links (Permanent Links) as well as patching and device connection cables. The<br />
associated standardized transmission link classes like OF-300, OF-500 or OF-2000 show the permitted attenuation<br />
in decibels (dB) and maximum fiber length in meters, as displayed in the following table.<br />
Transmission Link Classes and Attenuation<br />
Class | Realized in category | Multimode 850 nm | Multimode 1300 nm | Single mode 1310 nm | Single mode 1550 nm | Max. length<br />
OF-300 | OM1 to OM4, OS1, OS2 | 2.55 dB | 1.95 dB | 1.80 dB | 1.80 dB | 300 m<br />
OF-500 | OM1 to OM4, OS1, OS2 | 3.25 dB | 2.25 dB | 2.00 dB | 2.00 dB | 500 m<br />
OF-2000 | OM1 to OM4, OS1, OS2 | 8.50 dB | 4.50 dB | 3.50 dB | 3.50 dB | 2,000 m<br />
OF-5000 | OS1, OS2 | -- | -- | 4.00 dB | 4.00 dB | 5,000 m<br />
OF-10000 | OS1, OS2 | -- | -- | 4.00 dB | 4.00 dB | 10,000 m<br />
The limit value for attenuation in transmission link classes OF-300, OF-500 and OF-2000 is based on the<br />
assumption that 1.5 dB must be calculated as the connection allocation (2 dB for OF-5000 and OF-10000). Using<br />
OF-500 with multimode glass fibers of 850 nm as an example, this results in the following table value: 1.5 dB<br />
connection allocation + 500 m x 3.5 dB/1,000 m = 3.25 dB.<br />
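The class limits above follow directly from this calculation. A small sketch of the arithmetic (the 1.5 dB allocation and 3.5 dB/km multimode value at 850 nm come from the example; the function itself is ours, for illustration only):

```python
# Link-class attenuation limit = connection allocation + length * fiber loss.
# 1.5 dB allocation and 3.5 dB/km (multimode at 850 nm) are the values used
# in the example above.

def link_class_limit(length_m, fiber_atten_db_per_km, connection_allocation_db=1.5):
    """Maximum transmission-link attenuation in dB for a given link length."""
    return connection_allocation_db + (length_m / 1000.0) * fiber_atten_db_per_km

print(round(link_class_limit(500, 3.5), 2))   # OF-500 at 850 nm -> 3.25 dB
print(round(link_class_limit(300, 3.5), 2))   # OF-300 at 850 nm -> 2.55 dB
print(round(link_class_limit(2000, 3.5), 2))  # OF-2000 at 850 nm -> 8.5 dB
```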
The attenuation budget (see the calculation example above) must be maintained in order to ensure a reliable<br />
transmission. This is especially important for concrete data center applications like 10 Gigabit Ethernet and 40/100<br />
gigabit Ethernet protocols in accordance with IEEE802.3ba. In these cases, an extremely low attenuation budget<br />
must be taken into account. Maximum link ranges result accordingly (see tables on page 101).<br />
The transmission quality of a fiber optic connector is essentially determined by two features:<br />
• Insertion Loss (IL)<br />
Ratio of light output in fiber cores before and after the connection<br />
• Return Loss (RL)<br />
Amount of light at the connection point that is reflected back to the light source<br />
The smaller the IL value and the larger the RL value, the better the signal transmission in a plug connection will<br />
be. ISO/IEC 11801 and EN 50173-1 standards require the following values for both single mode as well as multimode:<br />
Insertion Loss (IL):<br />
• < 0.75 dB for 100% of plug connections<br />
• < 0.50 dB for 95% of plug connections<br />
• < 0.35 dB for 50% of plug connections<br />
Return Loss (RL):<br />
• > 20 dB for multimode glass fibers<br />
• > 35 dB for single mode glass fibers<br />
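Both quantities are power ratios expressed in decibels. A short sketch of the conversion (the power values are made-up examples for illustration, not from the standard):

```python
import math

# IL and RL expressed in dB from optical power levels:
#   IL = 10*log10(P_in / P_out)   - power lost crossing the connection
#   RL = 10*log10(P_in / P_refl)  - power reflected back toward the source
def insertion_loss_db(p_in_mw, p_out_mw):
    return 10 * math.log10(p_in_mw / p_out_mw)

def return_loss_db(p_in_mw, p_reflected_mw):
    return 10 * math.log10(p_in_mw / p_reflected_mw)

# A connection passing 89.1% of the power: IL of about 0.5 dB (within the 95% limit)
print(round(insertion_loss_db(1.0, 0.891), 2))
# One part in 10,000 reflected: RL = 40 dB (comfortably above both limits)
print(round(return_loss_db(1.0, 0.0001), 1))
```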
As already mentioned, return loss is a measurement for the amount of light at the connection point that is reflected<br />
back to the light source. The higher the RL decibel value, the lower reflections will be. Typical RL values lie at 35<br />
to 50 dB for PC, 60 to 90 dB for APC and 20 to 40 dB for multimode fibers.<br />
Initially, the end surface of fiber optic connectors was polished at a 90° angle to<br />
the fiber axis. Current standards require Physical Contact (PC) or Angled Physical<br />
Contact (APC) polishing. The expression HRL (High Return Loss) is also used fairly<br />
often and means the same thing as APC.<br />
In a PC surface polishing, the ferrule gets a convex polished end surface so fiber<br />
cores can make contact at their highest elevation points. As a result, the creation<br />
of reflections at the contact point is reduced.<br />
PC Physical Contact<br />
An additional improvement in return loss is achieved by means of APC angled<br />
polishing technology. Here, the convex end surfaces of the ferrule are polished<br />
angled (8°) to the axis of the fiber.<br />
SC connectors are also offered with a 9° angled polishing. They have IL and RL<br />
values that are identical to the 8° version and have therefore not gained acceptance<br />
worldwide.<br />
APC Angled Physical Contact<br />
As a comparison: the fiber itself has a return loss of 79.4 dB in the case of a 1310 nm fiber, 81.7 dB for 1550 nm and<br />
82.2 dB for 1625 nm (all values for a pulse length of 1 ns).<br />
Different amounts of light or modes are diffused and scattered back, depending on the<br />
junction of two fibers, eccentricities, scratches and impurities (red arrow).<br />
A PC connector that is polished and cleaned well has about 14.7 dB RL against air and 45<br />
to 50 dB when plugged in.<br />
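The roughly 14.7 dB figure against air matches the Fresnel reflection at a glass–air interface. A quick check (the refractive index of 1.45 is an assumed typical value for silica fiber, not taken from the text):

```python
import math

# Fresnel reflection at a glass-air interface:
#   R = ((n1 - n2) / (n1 + n2))**2,  RL = -10*log10(R)
# n = 1.45 is an assumed typical index for silica fiber.
def fresnel_return_loss_db(n_fiber, n_outside=1.0):
    r = ((n_fiber - n_outside) / (n_fiber + n_outside)) ** 2
    return -10 * math.log10(r)

print(round(fresnel_return_loss_db(1.45), 1))  # about 14.7 dB against air
```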
With APC connectors, modes are also scattered back as a result of the 8° or 9° polishing,<br />
though at an angle that is greater than the angle of acceptance for total reflection. As a<br />
result, modes are not transmitted. The calculation of the angle of acceptance using<br />
NA_G.652D = sin Θ ⇒ Θ = sin⁻¹(NA_G.652D) = sin⁻¹(0.13) = 7.47°<br />
shows that all modes which have an angle greater than 7.5° are decoupled after a few<br />
centimeters and therefore do not reach the source and interfere with it. A quality APC<br />
connector has at least 55 dB RL against air and 60 to 90 dB when plugged in.<br />
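The acceptance-angle arithmetic above can be verified in a few lines (only the NA value of 0.13 comes from the text; the function name is ours):

```python
import math

# Acceptance angle from the numerical aperture, as in the formula above:
#   Theta = arcsin(NA), here with NA = 0.13 for a G.652.D fiber.
def acceptance_angle_deg(numerical_aperture):
    return math.degrees(math.asin(numerical_aperture))

print(round(acceptance_angle_deg(0.13), 2))  # 7.47 degrees
```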
Quality Grades<br />
A channel’s attenuation budget is burdened substantially by the connections. In order to guarantee the compatibility<br />
of fiber optic connectors from different manufacturers, manufacturer-neutral attenuation values and geometric<br />
parameters for single-mode connectors were defined in 2007 via the IEC 61753 and IEC 61755-3-1/-2<br />
standards. The attenuation values established for random connector pairings, also known as Each-to-Each or<br />
Random-Mate, come significantly closer to actual operating connections than the attenuation values specified by<br />
manufacturers.<br />
One novelty of these Quality Grades is their requirements for mean (typical) values. This provides an optimal<br />
basis for the calculation of path attenuation. Instead of calculating connectors using maximum values, the specified<br />
mean values can be used.<br />
A grade M was also considered for multimode connectors in the drafts of the standards, but it was rejected in the<br />
standard that was adopted. Since then, manufacturers and planners have made do with information from older or<br />
accompanying standards to find guideline values for multimode connectors. R&M uses Grade M as it was<br />
described shortly before the publication of IEC 61753-1. These values were supported by ISO/IEC with a required<br />
insertion loss of < 0.75 dB per connector.<br />
These standards for multimode connector quality were already insufficient with the widespread introduction of<br />
10 Gigabit Ethernet, and especially for future Ethernet technologies providing 40 Gbit/s and 100 Gbit/s.<br />
An example of this: It should be possible to transmit 10 Gigabit Ethernet per the IEEE 802.3ae standard over the<br />
maximum distance of 300 m with OM3 fibers. In accordance with the standard, 1.5 dB will remain for connection<br />
losses after deducting fiber attenuation and power penalties (graphic, red line). Operating with current grade M<br />
multimode connectors and an insertion loss of 0.75 dB per connector, two plug connections would therefore be<br />
possible. However, this is not very realistic.<br />
Looking at it from the other side, assume that we want to use two MPO and two LC plug connections. With an<br />
insertion loss of 0.75 dB per connection, total attenuation will amount to 3 dB. As a result, the link could only<br />
measure about 225 meters (graphic, green line).<br />
Determining the Attenuation Budget of an OM3 Fiber<br />
with 10 gigabit Ethernet<br />
Blue: Remaining attenuation budget for connections<br />
Red: Remaining budget for 300-meter length<br />
Green: Remaining line length for two MPO and two LC connectors<br />
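The arithmetic in this example can be restated in a few lines (the 1.5 dB budget and 0.75 dB per connection come from the text above; the variable names are ours):

```python
# How many grade M connections fit into the 1.5 dB left over at 300 m,
# and what four such connections would actually cost.
BUDGET_AT_300M_DB = 1.5   # remaining for connections per the standard
IL_GRADE_M_DB = 0.75      # maximum insertion loss per grade M connection

max_connections = int(BUDGET_AT_300M_DB // IL_GRADE_M_DB)
print(max_connections)            # 2 - as stated, not very realistic

four_connections_db = 4 * IL_GRADE_M_DB
print(four_connections_db)        # 3.0 dB - double the 1.5 dB budget
```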
The same example, using a data rate of 40 Gbit/s, decreases the link to 25 meters. This shows that the attenuation<br />
budget available for connections becomes smaller and smaller as the data rate increases.<br />
Since up to this point the IEC 61755 group of standards could not present any reliable, and above all meaningful,<br />
values, R&M defined its own quality grades for multimode connectors, based on the empirical values acquired<br />
over years of experience. These multimode grades are based on the classification for single mode, extended by<br />
grade E_M (grade 5) for PCF and POF connectors.<br />
Grades in accordance with IEC 61753-1:<br />
IL, Insertion Loss | > 97 % | Mean | Comments<br />
Grade A | < 0.15 dB | < 0.07 dB | Not yet conclusively established<br />
Grade B | < 0.25 dB | < 0.12 dB |<br />
Grade C | < 0.50 dB | < 0.25 dB |<br />
Grade D | < 1.00 dB | < 0.50 dB |<br />
Grade M | < 0.75 dB (100 %) | < 0.35 dB | Not specified in IEC 61753-1<br />
RL, Return Loss | 100 % | Comments<br />
Grade 1 | > 60 dB | > 55 dB in unplugged state (APC only)<br />
Grade 2 | > 45 dB |<br />
Grade 3 | > 35 dB |<br />
Grade 4 | > 26 dB |<br />
Grades for R&M multimode connectors:<br />
IL | 100 % | 95 % | Median | RL | 100 % | Comments<br />
Grade A_M | 0.25 dB | 0.15 dB | 0.10 dB | Grade 2 | 45 dB | Only LSH, SC and LC single-fiber connectors<br />
Grade B_M | 0.50 dB | 0.25 dB | 0.15 dB | Grade 3 | 35 dB | Only single-fiber connectors<br />
Grade C_M | 0.60 dB | 0.35 dB | 0.20 dB | Grade 4 | 26 dB | Multi-fiber connectors<br />
Grade D_M | 0.75 dB | 0.50 dB | 0.35 dB | Grade 4 | 26 dB | Similar to current grade M<br />
Grade E_M | 1.00 dB | 0.75 dB | 0.40 dB | Grade 5 | 20 dB | For PCF and POF<br />
According to the graphic (page 121, red line), a maximum attenuation of 1.5 dB is available for four plug<br />
connections. By using the new multimode grades, we get a maximum attenuation of 1.2 dB with a certainty of<br />
99.9994% for two MPO grade C_M connectors and two LC grade B_M connectors, or a maximum of 0.7 dB with a<br />
certainty of 93.75%. The maximum target length of 300 m can therefore be achieved without a problem.<br />
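The 1.2 dB and 0.7 dB figures can be reproduced from the R&M multimode grade table above (grade values from the table; the variable names are ours):

```python
# Two MPO connectors at grade C_M plus two LC connectors at grade B_M,
# using the 95% and median columns of the R&M multimode grade table.
IL_CM_95, IL_CM_MEDIAN = 0.35, 0.20   # grade C_M (multi-fiber, e.g. MPO)
IL_BM_95, IL_BM_MEDIAN = 0.25, 0.15   # grade B_M (single-fiber, e.g. LC)

worst_case = 2 * IL_CM_95 + 2 * IL_BM_95
typical = 2 * IL_CM_MEDIAN + 2 * IL_BM_MEDIAN
print(round(worst_case, 2))   # 1.2 dB - fits within the 1.5 dB budget
print(round(typical, 2))      # 0.7 dB
```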
In this way, users also gain freedom and security in planning in multimode as well. If more connections are<br />
required, one just selects connectors from a higher quality class. The previous practice of using maximum values<br />
leads to incorrect specifications. The median, or 95% value, should be used in the planning process instead.<br />
As already required with single mode grades, multimode grades also require each-to-each guaranteed values (i.e.<br />
where connectors are measured against connectors). Information taken from measurements against a reference<br />
connector – a normal procedure with competitors – are not helpful, but instead lead to pseudo-security in the<br />
planning process. To be able to plan a future-proof network and to cope with the diminishing attenuation budgets,<br />
a reliable classification system by means of grades for single and multimode components is crucial.<br />
Connector Types<br />
The LC and MPO connector were defined for data center applications in accordance with ISO/IEC 24764, EN 50173-5<br />
and TIA-942 standards for fiber optic cabling systems.<br />
MPO (IEC 61754-7)<br />
The MPO (Multi-fiber Push-On) is based on a plastic ferrule that allows up to 24<br />
fibers to be housed in one connector. In the meantime, connectors with up to 72 fibers<br />
are already in development.<br />
This connector stands out for its compact design and easy operation, but brings<br />
disadvantages in optical performance and reliability.<br />
This connector is of crucial importance because of its increased packing density and<br />
ability to migrate to 40/100 gigabit Ethernet.<br />
LC Connector (IEC 61754-20)<br />
This connector is part of a new generation of compact connectors. It was developed<br />
by Lucent (LC stands for Lucent Connector). Its design is based on a 1.25 mm-diameter<br />
ferrule. The duplex coupling matches the size of an SC coupling. As a result, it can<br />
achieve extremely high packing densities, which makes the connector attractive for<br />
use in data centers.<br />
Other common connector types, sorted by IEC 61754-x, include:<br />
ST Connector (also called BFOC, IEC 61754-2)<br />
These connectors, with a bayonet locking mechanism, were the<br />
first PC connectors (1996). Thanks to the bayonet lock and<br />
extremely robust design, these connectors can still be found in<br />
LAN networks around the world (primarily in industry). ST designates<br />
a “straight” type.<br />
DIN/LSA (optical fiber plug connector, version A, IEC 61754-3, DIN 47256)<br />
Compact connector with screw lock, only known in the German-speaking world.<br />
SC Connector (IEC 61754-4)<br />
This connector type with its square design and push/pull system<br />
is recommended for new installations (SC stands for Square<br />
Connector or Subscriber Connector). It makes high packing<br />
density possible because of its compact design, and can be<br />
combined into duplex and multiple connections. Despite its age,<br />
the SC continues to gain in importance because of its outstanding<br />
properties. Up to this day, it remains the most important WAN<br />
connector worldwide, usually as a duplex version, due to its good optical properties.<br />
MU Connector (IEC 61754-6)<br />
The first small form connector was the MU connector. Based on its 1.25 mm ferrule, its appearance and<br />
functionality are similar to the SC, at half the model size.<br />
FC (Fibre Connector, IEC 61754-13)<br />
A robust, proven first-generation connector. The first true WAN<br />
connector, with millions still in use. Its screw cap makes it<br />
disadvantageous in confined space conditions, and it is therefore<br />
not considered for use in current racks with high packing<br />
densities.<br />
E-2000 (LSH, IEC 61754-15)<br />
This connector is a development from Diamond SA and is geared<br />
to LAN and CATV applications. It is produced by three licensed<br />
manufacturers in Switzerland, which has also led to its unequalled<br />
quality standard. The integrated dust shutter provides protection<br />
not only from dust and scratches, but also from laser beams.<br />
The connector can be locked using frames and levers which<br />
can be color coded and mechanically coded.<br />
MT-RJ (IEC 61754-18)<br />
The MT-RJ is a connector for LAN applications. Its appearance matches that of the familiar RJ45 connector from<br />
the copper world. It is used as a duplex connector.<br />
F-3000 (IEC 61754-20 compatible)<br />
LC-compatible connector with a dust and laser protection cover.<br />
F-SMA (Sub-Miniature Assembly, IEC 61754-22)<br />
Connector with screw cap and no physical contact between ferrules. The first standardized optical fiber connector,<br />
but only used today for PCF/HCS and POF.<br />
LX.5 (IEC 61754-23)<br />
Similar to the LC and F-3000 in size and shape, but only partially compatible with these connectors because of<br />
different ferrule distances in duplex mode. Developed by ADC.<br />
SC-RJ (IEC 61754-24)<br />
As the name reveals, the developers at R&M geared this<br />
connector to the RJ45 format. Two SCs form one unit in the<br />
design size of an RJ45. This corresponds to SFF (Small Form<br />
Factor). The 2.5 mm ferrule sleeve technology is used for this<br />
connector. As compared to the 1.25 mm ferrule, it is more robust<br />
and more reliable. The SC-RJ therefore stands out not only<br />
because of its compact design, but also through its optical and<br />
mechanical performance.<br />
It is a jack-of-all-trades – usable from grade A to M, from single mode to POF, from WAN to LAN, in the lab or outdoors.<br />
R&M has published a white paper on SC-RJ (“SC-RJ – Reliability for every Category”).<br />
Adapters<br />
The best optical fiber connector will not be of any use at all if it is combined with a bad adapter. This is why R&M<br />
adopted its grade philosophy for adapters as well. The following values were the basis for this:<br />
Optical properties, IL Insertion Loss | Grade B | Grade C | Grade D | Grade M | Grade N<br />
Sleeve material | Ceramic | Ceramic | Phosphor bronze | Ceramic | Phosphor bronze<br />
Attenuation variation (IL) delta, IEC 61300-3-4 | 0.1 dB | 0.2 dB | 0.3 dB | 0.2 dB | 0.3 dB<br />
Summary<br />
Despite the validity of cabling standards for data centers, it<br />
must be assumed that a cabling system designed in accordance<br />
with these standards will not necessarily achieve the same<br />
usability period that standards require for office cabling.<br />
Designs with a requirements-oriented cabling system, and the<br />
modified design and material selection derived from it, must<br />
not be described as incorrect. They represent an alternative<br />
and may be a reasonable option in smaller data centers or<br />
server rooms.<br />
<strong>Data</strong> centers must be planned like a separate building<br />
within a building, and a cabling system that is structured<br />
and static therefore becomes more likely – at least between<br />
distributors. In any event, planners must ensure that capacity<br />
(in terms of transmission rate) is valued more than<br />
flexibility. Quality connection technology must ensure that<br />
fast changes and fast repairs are possible using standard<br />
materials. This is the only way to be able to guarantee the<br />
high availabilities that are required.<br />
<strong>Data</strong> center planners and operators will also face the<br />
question of whether using single mode glass fiber is still<br />
strictly necessary. The performance level of the laser-optimized<br />
multimode glass fibers available today already<br />
opens up numerous options for installation. The tables above<br />
showed what potential can be covered with the OM3 and<br />
OM4 fibers. Do single mode fibers therefore have to be<br />
included in examinations of transmission rate and transmission<br />
link?<br />
A criterion that may make single mode installations absolutely necessary is, for example, the length restriction for<br />
OM3/OM4 from IEEE 802.3ba (40/100 GbE). The 150 meter limitation with OM4 fiber for 40/100 Gigabit Ethernet<br />
would likely be exceeded in installations with extensive data center layouts. Outsourced units, like backup<br />
systems stored in a separate building, could not be integrated at all. In this case, single mode is the only technical<br />
option. A cost comparison is not necessary.<br />
The high costs of transceivers for single mode transmission technology are a crucial factor in configurations that<br />
can be used with either multimode or single mode. Experience shows that a single mode link costs three to four<br />
times more than a multimode link. With multimode, up to double the port density can be achieved. However, highly<br />
sophisticated parallel optic OM4 infrastructures also require considerable investments. The extent to which the<br />
additional expenses are compensated through lower cabling costs (CWDM on single mode vs. parallel optic OM4<br />
systems) can only be determined by an overall examination of the scenario in question. In addition, the comparably<br />
higher energy consumption is also a factor. Single mode consumes about three to five times more watts<br />
per port than multimode.<br />
Whether one uses multimode length restrictions as a basis for a decision in favor of single mode depends upon<br />
strategic considerations in addition to a cost comparison. Single mode offers a maximum of performance<br />
reserves, in other words a guaranteed future beyond current standards. Planners who want to eliminate restricting<br />
factors over multiple system generations and are willing to accept a bandwidth/length overkill in existing applications<br />
should plan on using single mode.<br />
3.10 Implementations and Analyses<br />
The 40 GbE specification is geared to High-Performance Computing (HPC) and<br />
storage devices, and is very well suited for the data center environment. It supports<br />
servers, high-performance clusters, blade servers and storage networks (SAN).<br />
The 100 GbE specification is focused on core network applications – switching,<br />
routing, interconnecting data centers, Internet exchanges (Internet nodes) and<br />
service provider Peering Points (PP). High-performance computing environments<br />
with bandwidth-intensive applications like Video-on-Demand (VoD) will profit from<br />
100 GbE technology. Since data center transmission links typically do not exceed<br />
100 m, 40/100 GbE components will be significantly more economical than OS1<br />
and OS2 single mode components, and still noticeably improve performance.<br />
In contrast to some previous speed upgrades, in which new optical transceiver technologies, WDM (Wavelength<br />
Division Multiplexing) and glass fibers with higher bandwidth were implemented, the conversion to 40 GbE and<br />
100 GbE is implemented using existing photonic technology on a single connector interface with several pairs of<br />
transceivers. This new interface, known as Multi-Fiber-Push-On (MPO), presents an arrangement of glass fibers<br />
– up to twelve per row and up to two rows per ferrule (24-fiber) – as a single connector. It is relatively easy to<br />
implement 40 GbE, where four glass fiber pairs are used with a single transceiver with MPO interface. More<br />
difficult by far is the conversion to 100 GbE, which uses ten glass fiber pairs. There are a number of possibilities<br />
for arranging fibers on the MPO interface – right up to the two-row, 24-fiber MPO connector.<br />
The next great challenge for cable manufacturers is the difference in signal delay times (Delay Skew). A single<br />
transceiver for four fibers must separate four signals and then combine them back together. These signals are<br />
synchronized. Every offset between the arrival of the first signal and the arrival of the last has a detrimental effect<br />
on overall throughput, since the signals can only be re-bundled when all have arrived. The difference in signal delay,<br />
which in the past played a secondary role for serial devices, has now become a central aspect of the process.<br />
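As a rough illustration of why skew matters, travel time scales with fiber length, so a length mismatch between parallel fibers translates directly into skew. A sketch with assumed numbers (the group index and the 10 cm mismatch are hypothetical, not from the text):

```python
# Travel time in a fiber is t = L * n / c, so a length mismatch dL between
# parallel fibers of one trunk adds a skew of dL * n / c.
C_M_PER_S = 299_792_458        # speed of light in vacuum, m/s
N_GROUP = 1.48                 # assumed group index for silica fiber

def skew_ps(length_mismatch_m):
    """Delay skew in picoseconds caused by a fiber-length mismatch."""
    return length_mismatch_m * N_GROUP / C_M_PER_S * 1e12

# 10 cm of mismatch between two fibers of one trunk: roughly 0.5 ns of skew
print(round(skew_ps(0.10), 1))
```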
Ensuring the conformity of these high-performance cables requires specifications that are carefully worked out<br />
and a manufacturing process that uses the strictest of tolerances. It is up to cable manufacturers to implement<br />
their own processes for achieving conformity with standards during the cable design process. This includes the<br />
use of ribbon fiber technology and, more and more, the use of traditionally manufactured glass fiber cables.<br />
Traditional cables can provide advantages with respect to strength and flexibility and still have a high fiber<br />
density – up to 144 fibers in one cable with a 15 mm diameter. In many large data centers with tens of thousands<br />
of fiber optic links the need exists for faster solutions with an extremely high density, and which can be easily<br />
managed and scaled to a high bandwidth requirement.<br />
Another important factor is to retain the correct fiber arrangement (Polarity), regardless of whether the optical<br />
fiber installation is performed on site in a conventional manner, or using pre-assembled plug-and-play components.<br />
The transmitting port at one end of the channel must be linked to the receiving port at the other end – a<br />
very obvious matter. However, this situation did lead to confusion in the case of older single mode Simplex ST and<br />
FC connectors. Back then it was usually enough just to swap the connector at one end if a link could not be<br />
achieved.<br />
With the increased use of MPO multi-fiber connectors, it is no longer a good idea just to turn the connector<br />
around. The twelve fibers in the ferrule have already been ordered in the factory. There are various options for<br />
assigning fibers. These options are part of the different cabling standards, and are documented in the fiber<br />
assignment list. It is important to know that in two of the polarity variants listed, the assignment of components at<br />
the end of the transmission channel are different from each other, which can easily lead to problems when<br />
connectors are re-plugged and if care was not taken to ensure that the glass fiber components were installed<br />
according to instructions.<br />
Introductory information on parallel optic connection technology and MPO technology follows below. In addition,<br />
performance and quality criteria are listed to provide decision makers an initial orientation when planning their<br />
optical fiber strategy and selecting connection technology. The migration path to 40/100 GbE is also listed and<br />
discusses polarity methods.<br />
This last section also covers the bundling of power and data transmission into one cable – Power over Ethernet<br />
(PoE & PoEplus). A separate power feeding system for IP cameras, Wireless Access Points, IP telephones and<br />
other devices becomes unnecessary. Application possibilities increase greatly when higher power can be<br />
provided. That is why PoEplus was introduced in 2009. More electrical energy on the data cable automatically<br />
means more heat in the wire cores. This is obviously a risk factor. Anyone planning data networks that use PoEplus<br />
must therefore be especially careful when selecting a cabling system and take into account some limitations under<br />
certain circumstances. However, the heating problem can be managed through consistent compliance with<br />
existing and future standards so that no problems will come up during data transmission. There is another risk<br />
factor to consider: the danger of contact damage from sparks when unplugging under load. R&M tests show that<br />
high-quality, stable solutions will ensure continuous contact quality.<br />
The following explanations will provide support when planning data networks that use PoE and refer to<br />
still-unsolved questions in the current standardization process.<br />
Changes were made in Amendment 1 (2008) to the ISO/IEC 11801 standard with respect to the definition of the Class EA<br />
cabling structure. As a consequence, permanent links below 15 m are no longer allowed for the reference implementation.<br />
However, link lengths that are shorter than 15 m are common in various applications, especially in data<br />
centers.<br />
Information therefore appears below on the latest developments in the area of standardized copper cabling<br />
solutions for Short Links, listing new options that provide an orientation for planning high-performance data<br />
networks and decision aids for product selection.<br />
Two examinations follow. First, the capabilities of Class EA and FA cabling systems, or Cat. 6A and 7A components,<br />
are examined more closely and compared with one another. In the process, a comparison of the transmission<br />
capacities of the different classes is carried out and critical cabling parameters are listed. In addition, the EMC<br />
behavior of shielded and unshielded cabling systems for 10GBase-T is examined using an independent study.<br />
3.10.1 Connection Technology for 40/100 gigabit Ethernet (MPO/MTP ® )<br />
As shown in the previous chapter, parallel optical channels with multi-fiber multimode optical fibers of the categories<br />
OM3 and OM4 are used for implementing 40 GbE and 100 GbE. The small diameter of the optical fibers poses<br />
no problems in laying the lines, but the ports suddenly have to accommodate four or even ten times the number of<br />
connectors. This large number of connectors can no longer be covered with conventional individual connectors.<br />
That is why the IEEE 802.3ba standard incorporated the MPO connector for 40GBASE-SR4 and 100GBASE-SR10.<br />
It can contact 12 or 24 fibers in the tiniest of spaces. This chapter describes this type of connector and explains<br />
how it differs from the much improved MTP ® connectors of the kind R&M offers.<br />
MPO Connector for accommodating 24 fibers<br />
The MPO connector (known as multi-fiber push-on and also as multipath<br />
push-on) is a multi-fiber connector defined according to IEC<br />
61754-7 and TIA/EIA 604-5 that can accommodate up to 72 fibers in<br />
the tiniest of spaces, comparable to an RJ45 connector.<br />
MPO connectors are most commonly used for 12 or 24 fibers.<br />
The push-pull interlock with sliding sleeve and two alignment pins are<br />
meant to position the MPO connector exactly for more than 1000 insertion<br />
cycles.<br />
Eight fibers are needed for 40 GbE and twenty for 100 GbE. That means four contacts remain non-interconnected<br />
in each case. The diagrams below show the connection pattern:<br />
MPO Connectors, 12-fold (left) and 24-fold (right). The fibers for sending (red) and receiving (green) are color-coded.<br />
As with every connector, the quality of the connection for the MPO connector depends on the precision of the<br />
contacting. In this case, however, that precision must be executed 12-fold or 24-fold.<br />
The fibers are usually glued into holes within the ferrule body. Such holes have to be larger than the fiber itself<br />
to allow the fiber to be fed through, so there is always a certain amount of play in the hole.<br />
This play causes two errors that are crucial for attenuation:<br />
• Angle error (angle of deviation):<br />
The fiber is not exactly parallel in the hole but at an angle of deviation. The fibers are therefore inclined<br />
when they come into contact with each other in the connector and are also radially offset in relationship<br />
to each other. The fibers are also subject to greater mechanical loading.<br />
• Radial displacement (concentricity):<br />
The two fiber cores in a connector do not touch each other fully but are somewhat offset in relationship to<br />
each other. This is referred to as concentricity. The true cylindrical center is rotated once around the<br />
reference center. This results in a new cylinder whose diameter is defined as the concentricity<br />
value. In accordance with EN 50377-15-1, the concentricity of fiber holes in the MPO ferrule is allowed to<br />
be no more than 5 µm.<br />
One often talks about eccentricity, too, in connection with radial displacement. This is a vector indicating<br />
the distance of the true radial center of the cylinder from the reference center and the direction of<br />
deviation. To determine the concentricity, the true cylinder center is rotated once around the reference<br />
center. This value is therefore twice as big as the value for eccentricity but contains no information about<br />
the direction of deviation.<br />
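The relationship between eccentricity and concentricity described above can be captured in a minimal sketch (coordinates in µm; the measurement values are hypothetical, only the 5 µm limit follows EN 50377-15-1):<br />

```python
import math

# Sketch of the eccentricity/concentricity relationship; all
# coordinates in micrometers, sample values invented for illustration.
def eccentricity(true_center, ref_center):
    """Distance from the reference center to the true hole center."""
    dx = true_center[0] - ref_center[0]
    dy = true_center[1] - ref_center[1]
    return math.hypot(dx, dy)

def concentricity(true_center, ref_center):
    """Diameter swept when the true center is rotated about the
    reference center: twice the eccentricity, without direction."""
    return 2.0 * eccentricity(true_center, ref_center)

c = concentricity((1.2, 1.6), (0.0, 0.0))  # eccentricity = 2.0 um
print(round(c, 1), c <= 5.0)               # -> 4.0 True (within limit)
```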
Page 126 of 156 © 08/2011 Reichle & De-Massari AG R&M <strong>Data</strong> <strong>Center</strong> <strong>Handbook</strong> V2.0
www.datacenter.rdm.com<br />
In both cases, the consequences are higher insertion loss and return loss because part of the light is decoupled or<br />
reflected instead of being transmitted.<br />
MPO connectors (and the MTP ® connectors described below) are no longer terminated on site because of the delicate<br />
multi-fiber structure and narrow tolerances involved. MPO/MTP ® connectors are therefore sold already<br />
terminated together with trunk cables. With this arrangement, customers have to plan line lengths precisely but<br />
are also assured top quality and short installation times.<br />
MTP ® connectors with Elite ® ferrules from R&M<br />
With MPO connectors, the fibers are glued directly into the plastic body of the ferrule, which makes later reworking difficult and puts limits on manufacturing accuracy. Angle errors and radial displacement can be reduced only to a certain degree.<br />
To achieve lower tolerances and better attenuation values, the American connectivity specialist US Conec developed<br />
the MTP ® connector (MTP = mechanical transfer push-on). It has better optical and mechanical quality than<br />
the MPO. Unlike the MPO connector, an MTP ® connector consists of a housing and a separate MT ferrule (MT =<br />
mechanical transfer). The MTP ® connector differs from the MPO in various ways, e.g. rounded guide pins and oval-shaped compression springs. These features prevent scratches during plug-in and protect the fibers in the transition area from connector to cable. This design makes the connectors highly stable mechanically.<br />
The MT ferrule is a multi-fiber ferrule in which fiber alignment depends on the eccentricity and positioning of the fiber holes and of the holes for the centering pins. The centering pins control fiber alignment during insertion.<br />
Since the housing is detachable, the ferrules can undergo interferometric measurements and subsequent processing<br />
during the manufacturing process. In addition, the gender of the connector (male/female) can be changed at the<br />
last minute on site. A further advance from US Conec is that the ferrules can be moved longitudinally in the housing, which allows the user to define the applied contact pressure.<br />
A further improvement is the multimode (MM) MT Elite ® ferrule. It has about 50 percent less insertion loss and the<br />
same return loss as the MM MT standard ferrule, which typically has 0.1 dB insertion loss. Tighter process tolerances<br />
are what led to these better values. Series of measurements from the R&M Laboratory confirm the advantages<br />
of the R&M Elite ® ferrule. The histograms in the figure below and the table illustrate the qualitative differences between the two ferrule models. The random-mated measurements with R&M MTP ® patch cords were carried out in accordance with IEC 61300-3-45, one group with Standard and one with Elite ® ferrules. Thus, connections<br />
with R&M MTP ® Elite plugs, for example, have a 50 percent probability of exhibiting insertion loss of just 0.06 dB or<br />
less. There is a probability of 95 percent that these same connections will have optical losses of 0.18 dB or less.<br />
Insertion loss characteristic of Standard (left) and Elite ferrules (right) from the series of R&M measurements<br />
Ferrule type | 50% | 95%<br />
Standard | ≤ 0.09 dB | ≤ 0.25 dB<br />
Elite ® | ≤ 0.06 dB | ≤ 0.18 dB<br />
50% and 95% values for Standard and Elite ® ferrules<br />
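The 50%/95% figures above are simply percentiles over a series of random-mated insertion-loss measurements. A minimal sketch with invented sample values (not R&M data), using the nearest-rank percentile definition:<br />

```python
import math

# Nearest-rank percentile: smallest value such that at least p percent
# of the samples lie at or below it. Sample data invented for
# illustration only.
def percentile(samples, p):
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100.0 * len(ordered)))
    return ordered[rank - 1]

measured_il = [0.03, 0.04, 0.05, 0.06, 0.06, 0.07, 0.09, 0.12, 0.15, 0.18]
print(percentile(measured_il, 50))  # -> 0.06
print(percentile(measured_il, 95))  # -> 0.18
```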
For its own MPO solutions, R&M makes exclusive use of MTP ® connectors with Elite ® ferrules. However, it has further improved the already good measured values with a finishing process specially developed for the purpose. In other words, R&M not only tightened the EN-specified tolerances for the endface geometry of MPO connectors but also defined new parameters. Finishing is done on a special high-end production line.<br />
The criteria R&M sets for each processing step and parameter and for the statistical processes in quality management<br />
are basically more stringent than the relevant standards in order to guarantee users maximum operational<br />
reliability.<br />
Fiber Protrusion<br />
For instance, EN 50377-15-1 requires that fibers in MPO/MTP ® connectors protrude at least 1 µm beyond the end of the ferrule and have physical contact with their counterparts. The reason is that the ferrule of an MPO/MTP ® has an extremely large contacting surface area, which distributes the spring force pressing the ferrules together evenly across all fibers in each connector. If fibers are too short, there is a risk that only the surfaces of the two mated ferrules will come into contact with each other, preventing the two fibers from touching. Fiber height is therefore a critical factor for connection performance.<br />
At the same time, all fiber heights in an MPO/MTP ® connector have to be within a certain range. This range is defined as the distance between the shortest and the longest fiber and naturally has to be minimized to ensure that all fibers touch in a connection. The situation is exacerbated by the fact that the fiber heights do not vary linearly across the ferrule. Variation in fiber heights arises during the polishing of the fiber endfaces and can only be reduced by working with extreme care, using high-quality polishing equipment and suitable polishing agents. Although operators should always try to keep all fibers at the same height during polishing, a certain deviation in the sub-micrometer range cannot be avoided entirely. R&M has optimized this work step in such a way as to tighten the already tight tolerances of EN 50377-15-1 by a further 1 to 3.5 µm.<br />
One has to keep in mind not only the height differences across all 12 or 24 fibers but also those between adjoining fibers. As mentioned above, the spring force is ideally distributed evenly across all fibers. At the same time, fibers compress when subjected to pressure. If a short fiber is flanked by two higher adjoining fibers, it may not contact its counterpart in the other connector at all, which increases insertion loss and return loss.<br />
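The geometry checks described above can be sketched as follows. Only the minimum protrusion of 1 µm comes from the text; the range and adjacent-step limits are hypothetical placeholders, not EN 50377-15-1 values.<br />

```python
# Sketch of fiber-height checks for an MPO/MTP ferrule. Heights in
# micrometers of protrusion beyond the ferrule endface. The range and
# adjacent-step limits are placeholder assumptions.
def check_fiber_heights(heights_um, max_range_um=0.5, max_adjacent_um=0.3):
    problems = []
    if min(heights_um) < 1.0:                      # minimum protrusion
        problems.append("fiber shorter than 1 um protrusion")
    if max(heights_um) - min(heights_um) > max_range_um:
        problems.append("height range too large")
    for a, b in zip(heights_um, heights_um[1:]):   # adjoining fibers
        if abs(a - b) > max_adjacent_um:
            problems.append("adjacent height step too large")
            break
    return problems

print(check_fiber_heights([1.8, 1.9, 2.0, 1.9, 1.8]))  # -> []
print(check_fiber_heights([1.8, 0.8, 2.0]))            # three findings
```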
Core Dip<br />
Particular attention must be paid to what is called core dip: the point where the fiber cores come into contact with each other in a connection, and a major factor in determining insertion loss and return loss. Core dip is a depression in the fiber core, as illustrated in the figure. It occurs during polishing because the core is somewhat softer than the surrounding cladding. EN 50377-15-1 specifies that the depth of the core dip may be no more than 100 nm.<br />
In the case of a 24-fiber MPO/MTP® connector for 100GbE applications, the<br />
problem of optimum geometry becomes even more complex because these<br />
models have two rows of twelve fibers each that have to be polished. Optimizing the polish geometry is the prerequisite for ensuring the high quality of these connectors.<br />
Further parameters<br />
Besides fiber height and core dip, there are further parameters of crucial importance for the surface quality of<br />
fiber endfaces and thus for the transmission properties and longevity of a<br />
connector. Ferrule radius and fiber radius are two examples. Instead of focusing on just one value, the geometry as a whole must be examined during production and quality control.<br />
The tight tolerances R&M set are tougher than those in EN 50377-15-1 and<br />
ensure the top quality of the connectors. But 100 percent testing during<br />
production is what guarantees compliance with these tolerances. R&M therefore<br />
subjects all connector endfaces to an interferometric inspection (Figure).<br />
Defective ferrules are reworked until they comply with the specified tolerances.<br />
The insertion loss and return loss of all connectors are also checked. To reduce the insertion loss of connectors,<br />
the offset of two connected fibers must be as small as possible. R&M likewise measures the internal polarity of the<br />
fibers in all MPO/MTP ® cables to be sure that it is correct. Polarity refers to how the fibers are connected, e.g.<br />
fiber Tx1 (transmitting channel 1) leads to Rx1 (receiving channel 1), Tx2 to Rx2, Tx3 to Rx3, etc.<br />
Trunk cables<br />
On-site termination of an MPO/MTP ® connector with 12, 24 or even up to<br />
72 fibers is obviously no longer possible. In other words, if you use MPO<br />
connectors you also have to use trunk cables delivered already cut to<br />
length and terminated.<br />
This approach requires greater care in planning but has a number of<br />
advantages:<br />
• Higher quality<br />
Higher quality can usually be achieved with factory termination and testing of each individual product. A<br />
test certificate issued by the factory also serves as long-term documentation and as quality control.<br />
• Minimal skew<br />
The smallest possible skew between the four or ten parallel fibers is crucial to the success of a parallel<br />
optical connection. Only then can the information be successfully synchronized and assembled again at<br />
the destination. The skew can be measured and minimized with factory-terminated trunk cables.<br />
• Shorter installation time<br />
The pre-terminated MPO cable system can be incorporated and immediately plugged in with its plug and<br />
play design. This design greatly reduces the installation time.<br />
• Better protection<br />
All termination is done in the factory, so cables and connectors are completely protected from ambient<br />
influences. FO lines lying about in the open in splice trays are exposed at least to the ambient air and may<br />
age more rapidly as a result.<br />
• Smaller volume of cable<br />
MPO trunk cables can be produced with smaller diameters than conventional FO loose tube cables. As a result, cable volume decreases, conditions for air conditioning in the data center improve and the fire load declines.<br />
• Lower total costs<br />
In splice solutions, a number of not always clearly predictable factors set costs soaring, e.g. splicing<br />
involving much time and equipment, skilled labor, meters of cable, pigtails, splice trays, splice protection<br />
and holders. By comparison, pre-terminated trunk cables not only have technical advantages but also<br />
usually involve lower total costs than splice solutions.<br />
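The skew mentioned in the second bullet follows directly from the length mismatch between the parallel fibers. A sketch of the arithmetic, assuming a group index of about 1.48 for silica fiber (an assumption, not a figure from the text):<br />

```python
# Differential delay (skew) caused by a length mismatch between the
# parallel fibers of a trunk, assuming a group index of ~1.48.
C = 299_792_458.0  # speed of light in vacuum, m/s

def skew_ps(length_mismatch_m, group_index=1.48):
    """Skew in picoseconds for a given fiber length mismatch."""
    return length_mismatch_m * group_index / C * 1e12

print(round(skew_ps(0.001), 2))  # -> 4.94 (about 5 ps per mm mismatch)
```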
3.10.2 Migration Path to 40/100 Gigabit Ethernet<br />
MPO connectors contact up to 24 fibers in a single connection. A connection must be stable and its ends correctly aligned; these aspects are essential for achieving the required transmission parameters. A defective connection may damage components or even cause the link to fail altogether.<br />
MPO connectors are available in a male version (with<br />
pins) or a female version (without pins). The pins<br />
ensure that the fronts of the connectors are exactly<br />
aligned on contact and that the endfaces of the fibers<br />
are not offset.<br />
MPO male with pins (left) and MPO female without pins (right)<br />
Two other clearly visible features are the noses and guide grooves (keys) on the top side. They ensure that the adapter holds the two connectors in the correct orientation relative to each other.<br />
Adapters<br />
There are two types of MPO adapters based on the placement of the key:<br />
Adapters key-up to key-down (left), key-up to key-up (right)<br />
• Type A: Key-up to key-down<br />
Here the key is up on one side and down on the other.<br />
The two connectors are connected turned 180° in<br />
relation to each other.<br />
• Type B: Key-up to key-up<br />
Both keys are up. The two connectors are connected<br />
while in the same position in relation to each other.<br />
Connection rules<br />
1. When creating an MPO connection, always use one male connector and one female connector plus<br />
one MPO adapter.<br />
2. Never connect a male to a male or a female to a female.<br />
With a female-to-female connection, the fiber cores of the two connectors will not be aligned exactly because the guide pins are missing. That leads to losses in performance.<br />
A male-to-male connection has even more disastrous results: the guide pins of one connector hit against those of the other, so no contact is established at all. This can also damage the connectors.<br />
3. Never dismantle an MPO connector.<br />
The pins are difficult to detach from an MPO connector and the fibers might break in the process. In addition,<br />
the warranty becomes null and void if you open the connector housing!<br />
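Rules 1 and 2 amount to a simple gender check, which can be sketched as:<br />

```python
# Sketch of connection rules 1 and 2: a valid MPO mating consists of
# exactly one male (pinned) and one female (unpinned) connector.
def valid_mpo_mating(end_a, end_b):
    return {end_a, end_b} == {"male", "female"}

print(valid_mpo_mating("male", "female"))  # -> True
print(valid_mpo_mating("male", "male"))    # -> False (pins hit pins)
```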
Cables<br />
MPO cables are delivered already terminated. This approach requires greater care in planning in advance but has<br />
a number of advantages: shorter installation times, tested and guaranteed quality and greater reliability.<br />
Trunk cables/patch cables<br />
Trunk cables serve as a permanent link connecting the MPO modules to each other. Trunk cables are available<br />
with 12, 24, 48 and 72 fibers. Their ends are terminated with the customer's choice of 12-fiber or 24-fiber MPO<br />
connectors. MPO patch cords are not used until 40/100G active devices (with MPO interfaces) are employed.<br />
The ends of MPO patch cords are terminated with the customer's choice of 12-fiber or 24-fiber MPO connectors.<br />
Trunk/patch cables are available in a male – male version (left) and a female – female version (right)<br />
Harness cables<br />
Harness cables provide a transition from multi-fiber cables to individual fibers or duplex connectors. The 12-fiber<br />
harness cables available from R&M are terminated with male or female connectors on the MPO side; the whips<br />
are available with LC or SC connectors.<br />
Y cables<br />
Y cables are generally used in the 2-to-1 version. A typical application is to join two 12-fiber trunk cables to a 24-fiber patch cord as part of a migration to 100 GbE. The rarer 1-to-3 version allows three eight-fiber MTP connectors to be joined to a 24-fiber permanent link, e.g. for migration to 40 GbE.<br />
Duplex patch cables<br />
Classic duplex cables are involved here, not MPO cables. They are available in a cross-over version (A-to-A) or a<br />
straight-through version (A-to-B) and are terminated with LC or SC connectors.<br />
Modules and adapter plates<br />
Modules and adapter plates are what connect the permanent link to the patch cable. The MPO module enables<br />
the user to take the fibers brought by a trunk cable and distribute them to a duplex cable. As already assembled<br />
units, the MPO modules are fitted with 12 or 24 fibers and have LC, SC or E2000 ∗ adapters on the front side<br />
and MPO at the rear.<br />
The adapter plate connects the MPO trunk cable with an MPO patch cord or harness cable. MPO adapter plates<br />
are available with 6 or 12 MTP adapters, type A or type B.<br />
MPO module with LC duplex adapters MPO adapter plate (6 x MPO/MTP ® )<br />
The polarity methods<br />
The connectors and adapters are coded throughout to ensure the correct orientation of the plug connection. The three polarity methods A, B and C defined in TIA-568-C are in turn used to guarantee the correct bidirectional allocation. This section describes these methods briefly.<br />
Method A<br />
Method A uses straight-through type A backbones (pin1 to pin1) and type A (key-up to key-down) MPO adapters.<br />
On one end of the link is a straight-through patch cord (A-to-B), on the other end is a cross-over patch cord (A-to-<br />
A). A pair-wise flip is done on the patch side. Note that only one A-to-A patch cord may be used for each link.<br />
MPO components from R&M have been available for Method A since 2007. The method can be implemented quite easily, because, for example, just one cassette type is needed, and it is probably the most widespread method.<br />
Method B<br />
Method B uses cross-over type B backbones (pin1 to pin12) and type B (key-up to key-up) MPO adapters. However,<br />
the type B adapters are used differently on the two ends (key-up to key-up versus key-down to key-down),<br />
which requires more planning effort and expense. A straight-through patch cord (A-to-B) is used on both ends of<br />
the link.<br />
Method B does not enjoy wide use because of the greater planning effort and expense involved and because singlemode MPO connectors cannot be used with it. R&M supports this method only on request.<br />
Method C<br />
Method C uses pair-wise flipped type C backbones and type A (key-up to key-down) MPO adapters. A straight-through patch cord (A-to-B) is used on both ends of the link. In other words, the pair-wise flip of polarity occurs in the backbone, which increases the planning effort and expense for linked backbones. With an even number of linked backbones, an A-to-A patch cord is needed.<br />
Method C is not very widespread either, because of the greater planning effort and expense involved and because it does not offer a way of migrating to 40/100 GbE. R&M supports Method C only on request.<br />
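The three backbone wiring types behind these methods can be modeled as mappings of fiber positions 1-12; a sketch, following the type A/B/C definitions above:<br />

```python
# Trunk-cable wiring types as position mappings (fiber positions 1-12).
def type_a(pos):  # key-up to key-down, straight through: 1 -> 1
    return pos

def type_b(pos):  # key-up to key-up, reversed: 1 -> 12
    return 13 - pos

def type_c(pos):  # pair-wise flipped: 1 <-> 2, 3 <-> 4, ...
    return pos + 1 if pos % 2 else pos - 1

# Two cascaded type B (or type C) segments cancel each other out, which
# is why even-numbered linked type C backbones need an A-to-A patch
# cord to restore the pair-wise flip.
print([type_b(type_b(p)) for p in range(1, 13)])  # identity again
print([type_c(p) for p in range(1, 13)])          # -> [2, 1, 4, 3, ...]
```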
∗ E-2000, manufactured under license from Diamond SA, Losone<br />
Method R<br />
Method R (a designation defined by R&M) has been available since 2011. It requires just one type of patch cord (A-to-B). The crossover of the fibers for duplex signal transmission (10GBase-SR) takes place in the pre-assembled cassette. The connectivity diagram for trunk cable and patch cord, i.e. the light guidance, remains the same throughout, even for parallel transmission (as in Method B) when setting up 40/100 GbE installations. Capacity can therefore be expanded directly in an uncomplicated and inexpensive manner: the only thing that has to be done is to replace the cassettes with adapter plates.<br />
The table below summarizes the described methods once again:<br />
<br />
TIA-568-C standard (duplex signals):<br />
Polarity method | MPO/MTP cable | MPO module | Duplex patch cable<br />
A | Type A | Type A (type A adapter) | 1 x A-to-B, 1 x A-to-A<br />
B | Type B | Type B1, Type B2 (type B adapter) | 2 x A-to-B<br />
C | Type C | Type A (type A adapter) | 2 x A-to-B<br />
R | Type B | Type R (type B adapter) | 2 x A-to-B<br />
<br />
TIA-568-C standard (parallel signals):<br />
Polarity method | MPO/MTP cable | Adapter plate | MPO/MTP patch cable<br />
A | Type A | Type A | 1 x type A, 1 x type B<br />
B | Type B | Type B | 2 x type B<br />
Polarity, methods and component types<br />
Type A: MPO/MTP cable, key-up to key-down, wired straight through (fiber 1 to position 1, fiber 12 to position 12)<br />
Type B: MPO/MTP cable, key-up to key-up, wired reversed (fiber 1 to position 12, fiber 12 to position 1)<br />
Type C: MPO/MTP cable, key-up to key-down, flipped pair-wise (fibers 1/2 swapped, 3/4 swapped, etc.)<br />
Type A adapter: key-up to key-down; type B adapter: key-up to key-up<br />
Duplex patch cables: A-to-B (straight-through) and A-to-A (cross-over)<br />
The complete rebuilding of a data center is certainly not an everyday event. When it does happen, the operator has the opportunity to rely immediately on the newest technologies and to lay the groundwork for higher bandwidths. Gradually converting or expanding existing infrastructure to accommodate 100 Gb/s will be a common occurrence in the years ahead; indeed, it will have to be. A useful approach is to successively replace first the existing passive components, then the active components as they become available and reasonably affordable. The capacity expansion is usually done in three steps:<br />
The capacity expansion in existing 10G environments<br />
The standards TIA-942, DIN EN 50173-5 and ISO/IEC 24764 provide specifications for network planning in data<br />
centers. The steps below assume a correspondingly well-planned and installed network and merely describe the<br />
migration to 100 GbE.<br />
The first step in the migration from 10 GbE to 40/100 GbE certainly involves the expansion of capacity in an<br />
existing 10 GbE environment. A 12-fiber MPO cable serves as the permanent link (backbone). MPO modules and<br />
patch cords establish the connection to the 10G switches.<br />
Different constellations emerge depending on the legacy infrastructure and the polarity method used.<br />
Method A<br />
10G, Case 1 - an MPO trunk cable (type A, male-male) serves as the permanent link (middle). MPO modules (type A, female)<br />
provide the transition to the LC Duplex patch cords A-to-B (left) and A-to-A (right).<br />
10G, Case 2 - an MPO trunk cable (type A, male-male) serves as the permanent link (middle). An MPO module (type A,<br />
female) provides the transition to LC Duplex patch cord A-to-B (left), adapter plate (type A) and harness cable (female) as an<br />
alternative to the combination of MPO module and LC Duplex patch cord.<br />
10G, Case 3 - duplex connection without permanent link, consisting of LC Duplex patch cord A-to-B (left), MPO module<br />
(type A, female) and harness cable (male).<br />
Method R<br />
10G - an MPO trunk cable (type B, male-male) serves as a permanent link (middle), MPO modules (type R, female) provide<br />
the transition to the LC Duplex patch cords A-to-B (left, right).<br />
Important: Harness cables cannot be used with Method R!<br />
The capacity expansion from 10G to 40G<br />
If the next step involves replacing the 10G switches with 40G versions, MPO adapter plates can be installed easily<br />
instead of the MPO modules to make the next adaptation.<br />
Here, too, the installer has to keep in mind the polarity method used.<br />
Method A<br />
Replacement of the MPO modules (type A) with MPO adapter plates (type A) and the LC Duplex patch cords with MPO patch<br />
cords type A, female-female (left) and type B, female-female (right). The permanent link (type A, male-male) remains.<br />
Method B<br />
Replacement of the MPO modules (type R) with MPO adapter plates (type B) and the LC Duplex patch cords with MPO<br />
patch cords type B, female-female (left, right). The permanent link (type B, male-male) remains.<br />
The upgrade from 40G to 100G<br />
Finally, in the last step, 100G switches are installed. This requires the use of 24-fiber MPO cables. The existing 12-fiber connection (permanent link) can either be expanded with the addition of a second 12-fiber connection or be replaced with the installation of a 24-fiber connection.<br />
Method A<br />
Capacity expansion for the MPO trunk cable (type A, male-male) with the addition of a second trunk cable; the MPO adapter<br />
plates (type A) remain unchanged; the MPO patch cords are replaced with Y cables. (left: Y cable female-female type A; right: Y<br />
cable female-female type B)<br />
The MPO-24 solution - using one MPO-24 trunk cable (type A male-male); the MPO adapter plates (type A) remain unchanged.<br />
The patch cords used are MPO-24 patch cords type A, female-female (left) and type B, female-female (right).<br />
Method B<br />
Capacity expansion for the MPO trunk cable (type B, male-male) with the addition of a second trunk cable; the adapter plates<br />
(type B) remain unchanged; the MPO patch cords are replaced with Y cables. (left, right: Y cable female-female type B)<br />
The MPO-24 solution - using one MPO-24 trunk cable (type B male-male); the MPO adapter plates (type B) remain unchanged.<br />
The patch cords used are MPO-24 patch cords type B, female-female (left, right).<br />
Summary<br />
Planners and managers at data centers face new requirements with the introduction of MPO components and<br />
parallel optical connections. They will have to plan cable lengths carefully, select the right MPO types, keep<br />
polarities in mind across the entire link and calculate attenuation budgets with precision. Short-term changes are<br />
expensive and planning mistakes can be costly.<br />
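The attenuation-budget calculation just mentioned can be sketched in a few lines. The 1.9 dB channel budget is the commonly cited IEEE 802.3ba figure for 40GBASE-SR4 over 100 m of OM3; it and the per-mating losses are assumptions to be replaced with the values from the applicable standard and component datasheets.<br />

```python
# Sketch of an attenuation-budget check for an MPO channel.
# fiber_db_per_km = 3.5 dB/km is a typical multimode figure at 850 nm;
# the connector losses below are placeholder values.
def channel_loss_db(fiber_m, connector_losses_db, fiber_db_per_km=3.5):
    return fiber_m / 1000.0 * fiber_db_per_km + sum(connector_losses_db)

budget_db = 1.9                              # assumed 40GBASE-SR4/OM3 budget
loss = channel_loss_db(100, [0.25, 0.25])    # trunk + two MTP matings
print(round(loss, 2), loss <= budget_db)     # -> 0.85 True
```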
The changeover is nevertheless worthwhile, especially since technology will make it mandatory in the medium term. It therefore makes sense to lay the groundwork early and at least adapt the passive components to meet the upcoming requirements. Short installation times, tested and verified quality for each individual component, and reliable operations and investments for years to come are the three big advantages that make the greater effort and expense more than worthwhile.<br />
3.10.3 Power over Ethernet (PoE/PoEplus)<br />
Since its introduction in 2003, PoE has grown into a thriving market and is forecast to continue to grow significantly in the future. Market research group Dell’Oro predicts that in 2011, 100 million PoE-enabled devices will be sold, as well as over 140 million PoE ports in power sourcing equipment such as switches (see figure below).<br />
The market study "Power over Ethernet: Global Market Opportunity Analysis" by VDC (Venture Development Corporation, Natick, Massachusetts) discusses the adoption drivers of the leading PoE applications, IP phones and wireless access points (WAPs), and the leading vendors of these devices. According to the research, the demand for enterprise WAPs will increase by nearly 50 % per year through 2012.<br />
Initially, a maximum power of 12.95 W at the powered device (PD) was defined by the standard. Since the introduction of Power over Ethernet, demand for higher power has grown in order to supply devices such as:<br />
• IP cameras with pan/tilt/zoom functions<br />
• POS terminals<br />
• RFID readers<br />
• VoIP video phones<br />
• Multiband wireless access points (IEEE 802.11n)<br />
Therefore, a new standard was developed with the objective of providing a minimum of 24 W of power at the PD.<br />
• IEEE 802.3af: Power over Ethernet (PoE) = 12.95 W power<br />
• IEEE 802.3at: Power over Ethernet (PoEplus) = average 25.5 W power<br />
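These two power figures follow directly from the minimum PSE voltage, the allowed current and the I²R loss over the channel's DC loop resistance (20 Ω for PoE, 12.5 Ω for PoEplus Type 2). A sketch of the arithmetic:<br />

```python
# Power arriving at the PD: PSE output power minus the resistive loss
# over the cabling's DC loop resistance (P = V*I - I^2 * R_loop).
def pd_power_w(pse_voltage_v, current_a, loop_resistance_ohm):
    return pse_voltage_v * current_a - current_a**2 * loop_resistance_ohm

print(round(pd_power_w(44.0, 0.35, 20.0), 2))  # PoE (Type 1)     -> 12.95
print(round(pd_power_w(50.0, 0.60, 12.5), 2))  # PoEplus (Type 2) -> 25.5
```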
The newer standard, IEEE 802.3at (PoEplus), was released in October 2009. The definitions of powered devices (PD) and power sourcing equipment (PSE) are the same as in the PoE standard IEEE 802.3af. On the right side, one possible endspan option is pictured.<br />
Application Objectives<br />
The objectives of the IEEE standards group were as follows:<br />
1. PoEplus will enhance 802.3af and work within its defined framework.<br />
2. The target infrastructure for PoEplus will be ISO/IEC 11801-1995 Class D / ANSI/TIA/EIA-568.B-2 category 5<br />
(or better) systems with a DC loop resistance no greater than 25 ohms.<br />
3. IEEE STD 802.3 will continue to comply with the limited power source and SELV requirements as defined in<br />
ISO/IEC 60950.<br />
4. The PoEplus power sourcing equipment (PSE) will operate in modes compatible with the existing requirements of IEEE Std 802.3af as well as in enhanced modes (see table below).<br />
5. PoEplus will support a minimum of 24 Watts of power at the Powered Device (PD).<br />
6. PoEplus PDs, which require a PoEPlus PSE, will provide the user with an active indication when connected<br />
to a legacy 802.3af PSE. This indication is in addition to any optional management indication that may be<br />
provided.<br />
7. The standard will not preclude the ability to meet FCC / CISPR / EN Class A, Class B, Performance Criteria<br />
A and B with data for all supported PHYs.<br />
8. Extend power classification to support PoEplus modes.<br />
9. Support the operation of midspan PSEs for 1000BASE-T.<br />
10. PoEplus PDs within the power range of 802.3af will work properly with 802.3af PSEs.<br />
PD operation based on PSE version:<br />
 | PoE PSE | PoEplus PSE<br />
PoE PD | Operates | Operates<br />
PoEplus PD &lt; 12.95 W | Operates | Operates (note)<br />
PoEplus PD &gt; 12.95 W | PD will provide user activity indication | Operates (note)<br />
PD = powered device, PSE = power sourcing equipment<br />
Note: operates with extended power classification<br />
Compatibility between PoE and PoEplus versions of the PD and PSE<br />
The initial goal was to provide 30 W of power at the PD over two pairs, but this was revised down to an average power of 25.5 W. The goal of doubling this power by transmitting over four pairs was also removed, but could be revisited in a new version at a later date.<br />
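The 25.5 W figure above follows directly from the electrical budget: the power injected by the PSE minus the I²R loss in the cabling. A minimal sketch, assuming the Type 2 figures quoted in this chapter (50 V minimum PSE voltage, 0.6 A) and a worst-case DC loop resistance of 12.5 ohms for the channel (an assumed value, not taken from this text):

```python
# Sketch: PoEplus (802.3at Type 2) power budget from PSE to PD.
# The 12.5-ohm worst-case DC loop resistance is an assumption for
# illustration, not a value given in this handbook.

def pd_power(v_pse: float, i: float, r_loop: float) -> float:
    """Power left at the PD after the I^2*R loss in the cabling."""
    p_pse = v_pse * i            # power injected by the PSE
    p_loss = i ** 2 * r_loop     # dissipated as heat in the conductors
    return p_pse - p_loss

print(pd_power(50.0, 0.6, 12.5))  # -> 25.5
```

With these assumptions the PSE injects 30 W, 4.5 W is dissipated in the cabling, and 25.5 W remains at the PD, matching the average power cited above.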
Great care was taken to be backward compatible and continue support for legacy PoE, or low power devices. The<br />
following terminology was thus introduced to differentiate between low power and the new higher power devices:<br />
• Type 1: low power<br />
• Type 2: high power<br />
The table below summarizes some of the differences between PoE and PoEplus.<br />
With the higher currents flowing through the cabling, heating was an important consideration. Some vendors have recommended using higher category cabling to reduce these effects. Below we look at this issue in detail.<br />
                     PoE                PoEplus<br />
Cable requirement    Cat. 3 or better   Type 1: Cat. 3 or better / Type 2: Cat. 5 or better<br />
PSE current (A)      0,35 A             Type 1: 0,35 A / Type 2: 0,6 A<br />
PSE voltage (Vdc)    44-57 Vdc          Type 1: 44-57 Vdc / Type 2: 50-57 Vdc<br />
PD current (A)       0,35 A             Type 1: 0,35 A / Type 2: 0,6 A<br />
PD voltage (Vdc)     37-57 Vdc          Type 1: 37-57 Vdc / Type 2: 47-57 Vdc<br />
Differences between PoE and PoEplus / Source: Ethernet Alliance, 8/2008<br />
Heat Considerations for Cabling<br />
The transmission of power over generic cabling will result in some temperature rise in<br />
the cabling depending on the amount of power transferred and the conductor size. The<br />
cable in the middle of a bundle will naturally be somewhat warmer due to the inability of<br />
heat to dissipate. As heat increases in the cable (ambient + temperature increase), insertion<br />
loss also increases which can reduce the maximum allowed cable length.<br />
In addition, the maximum temperature (ambient + increase) is limited to 60°C according to<br />
standards.<br />
Therefore, two limiting factors are given:<br />
• Reduction of the maximum allowed cable length due to higher cable insertion loss from higher temperatures<br />
• The maximum specified temperature of 60°C given in the standard<br />
The temperature rise for various cable types was determined through tests performed by the IEEE 802.3at PoEplus working group, measured in bundles of 100 cables (table below).<br />
Cable Type    Profile    Approx. Temp. Rise<br />
Cat. 5e/u     AWG 24     10 °C<br />
Cat. 5e/s     AWG 24      8 °C<br />
Cat. 6/u      AWG 24+     8 °C<br />
Cat. 6A/u     AWG 23      6 °C<br />
Cat. 6A/s     AWG 23      5 °C<br />
Cat. 7        AWG 22      4 °C<br />
Temperature rise operating PoEplus<br />
Investigations showed that the use of AWG 23 and AWG 22 cable for PoEplus is not absolutely necessary at room temperature. The problem of increased cable temperatures with PoEplus should be considered with long cable lengths, or long patch cords, and with high ambient temperatures such as those found in tropical environments.<br />
With an unshielded Cat. 5e/u cable, an additional temperature increase of 10°C from PoEplus at an ambient temperature of 40°C would mean a reduction in the allowed permanent link length of approximately 7 m. With a shielded Cat. 5e/s cable at an ambient temperature of 40°C, this reduction would only be approximately 1 m.<br />
This 7 m or 1 m reduction in the link length could be compensated with a higher category cable with a larger wire<br />
diameter. However, a careful review of the cost benefit relationship of such a solution is recommended. It should<br />
also be considered that the length restrictions for class E and F are much more severe than those for PoEplus and<br />
may limit the applicable permanent link length.<br />
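The length reduction described above can be sketched numerically. The derating model below (insertion loss growing linearly with temperature above a 40 °C reference) and its coefficients are illustrative assumptions, not values taken from this handbook:

```python
# Sketch of length derating: insertion loss of copper cabling rises with
# temperature, so the maximum link length shrinks. The per-degree
# coefficients below are assumptions for illustration (unscreened cable
# derates several times faster than screened cable).

ALPHA = {"Cat. 5e/u": 0.006, "Cat. 5e/s": 0.002}  # IL increase per degree C of rise

def derated_length(l0: float, cable: str, delta_t: float) -> float:
    """Maximum link length after a temperature rise of delta_t degrees C."""
    return l0 / (1.0 + ALPHA[cable] * delta_t)

for cable in ALPHA:
    shorter = 90.0 - derated_length(90.0, cable, 10.0)  # +10 degree C PoEplus rise
    print(f"{cable}: about {shorter:.1f} m shorter")
```

With these assumed coefficients the unscreened cable loses several meters of allowed length for a 10 °C rise while the screened cable loses well under 2 m, the same order as the 7 m and 1 m figures quoted above.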
In any case, when planning an installation for PoEplus, extra care must be taken to consider the consequences of<br />
heat dissipation, both in the cable and ambient, regardless of which cable is used.<br />
Connector Considerations<br />
R&M investigated the effects of PoE on the connector, specifically the damage from sparking that may be caused<br />
by unmating a connection under power. In addition, R&M co-authored a technical report on this subject that will be<br />
published by IEC SC48B “The effects of engaging and separating under electrical load on connector interfaces<br />
used in Power-over-Ethernet (PoE) applications”.<br />
In this paper, the concept of nominal contact area was introduced. During the mating operation, the point of contact<br />
between A (plug) and B (module) moves along the surface of the contacts from a point of first contact (the<br />
connect/disconnect area) to the point of final rest (the nominal contact area). These two areas are separated by<br />
the wiping zone (figure below).<br />
Illustration of the nominal contact area concept<br />
The investigations showed that the traditional design of modular connectors described in the IEC 60603 standards ensures that the zone where contact is broken and sparking can occur is separate from the zone where contact between plug and jack is made during normal operation (the nominal contact area).<br />
The picture on the left illustrates the case of a good contact design where damage does not affect the contact<br />
zone. The picture on the right shows a bad contact design where the contacts are folded and the damage is in the<br />
contact zone (overlap of nominal contact area and connect/disconnect area).<br />
Left: good contact design (R&M module). Right: bad contact design due to overlapping of the nominal contact area and the connect/disconnect area. Photos: R&M<br />
The increased power of PoEplus may cause a larger spark upon disconnection, which will aggravate this problem. In addition, with new Category 6A, 7 and 7A connecting hardware, the contact design may deviate significantly from more traditional designs and thus be affected by the electrical discharges.<br />
Unfortunately, the standards bodies have not yet fully addressed this concern. Test methods and specifications<br />
have not been finalised to ensure that connecting hardware will meet the demands of PoEplus.<br />
Efforts to date in both IEEE and ISO/IEC bodies have focused mainly on establishing the limits in terms of cable<br />
heating. Until the connecting hardware is also addressed, a guarantee of PoEplus support for a cabling system is<br />
premature.<br />
R&M will continue to push for resolution of this issue in the appropriate cabling standards bodies and will inform<br />
the customers as new information becomes available.<br />
Conclusions<br />
The success of PoE to date and the demand for PoEplus indicate that this technology has met a need in the market<br />
that will continue to grow and expand. There are many issues to take into account when implementing this technology<br />
including power sourcing and backup in the rack and dealing with the additional heat produced there by the<br />
use of PoEplus-capable switches.<br />
The cabling system also needs to be carefully considered. Much effort has been invested in looking at the effects<br />
of increased heat in the cabling. As we have seen, the combination of high ambient heat and the effects of<br />
PoEplus can lead to length limitations for all types of cabling. Customers are therefore encouraged to choose the<br />
appropriate cabling for their application and requirements after reviewing the specifications and guidelines provided.<br />
The same amount of effort needs to be invested into looking at the effects of unmating connecting hardware under<br />
power. Unfortunately, to date, this work has not been undertaken by the standards bodies and thus no specifications<br />
or test methods currently exist to ensure compatibility with the upcoming PoEplus standard.<br />
3.10.4 Short Links<br />
The standard ISO/IEC 11801 Amendment 2 specifies that Class EA can only be fulfilled with Cat. 6A components when the minimum length of 15 m for the permanent link (PL) is observed. Shorter PLs are allowed when the manufacturer guarantees that the PL requirements are met.<br />
The wish for shorter links and the question of headroom<br />
Today, many users tend to shorten the links in order to save material, energy and costs. But what effect does this have on the performance and transmission reliability of the network?<br />
Shortening the distance between the two modules in a 2-connector permanent link lowers the attenuation of the cable between the modules, which in turn increases the interfering effect of the far module. The standard, however, accounts for only one module in its calculation, which is problematic: in particular, the limit values for NEXT and RL (return loss) can no longer be maintained. This is why the standard defines a minimum link length of 15 m.<br />
What happens if a high-quality module is paired with a high-quality cable? The result is a PL with much headroom in terms of length in the range between 15 m and 90 m. Using the R&M Advanced System, for example, ensures a typical NEXT headroom of 6 dB at 15 m and 11 dB at 90 m (see Figure below).<br />
NEXT headroom of permanent links, measured in an installation using the new Cat. 6A module of R&M. The zero line shows the limit value in acc. with ISO/IEC 11801 Amendment 2.<br />
In the R&M freenet warranty program, R&M guarantees a NEXT headroom of 4 dB for permanent links of 15 m and longer, built with components from the R&M Advanced System. Alternatively, the high headroom gives customers the possibility to set up very short PLs which fulfill all the requirements of the standard ISO/IEC 11801 Amendment 2.<br />
The advantage of additional headroom<br />
Basically, it makes sense to lay permanent links that are exactly as long as is needed. However, that would mean<br />
that the links would often be shorter than the minimum length of 15 m. When components are used that only meet<br />
the standard requirements, the permanent links must be artificially extended with loops to 15 m to make sure that<br />
the limit values are reached. This not only entails additional costs but these cable loops also take up space in the<br />
cable conduits and interfere with the ventilation, which results in an increase in infrastructure costs and energy<br />
consumption.<br />
Thanks to the large headroom at standard lengths, the use of R&M's Cat. 6A module allows the minimum length to be shortened down to 2 m. This is sufficient to cover the lengths customarily used in data centers.<br />
On average, the cables between individual server cabinets and the network cabinet need to be 4 to 5 meters in length if they are routed from above over a cable guidance system, or 7 to 8 meters if they are routed from below through the raised floor. The loops to extend the cables to 15 m are not required when the Class EA/Cat. 6A solution developed by R&M is used.<br />
How to measure short permanent links<br />
According to ISO/IEC 11801 Amendment 2 (Table A.5), less stringent requirements apply above 450 MHz to short 2-connector permanent links with an insertion loss of less than 12 dB at 450 MHz. Even though only the range between 450 MHz and 500 MHz is affected, the field measuring device should always be set to "Low IL". The maximum lowering of the NEXT requirement is 1.4 dB. The figures below show an example of a measuring arrangement and the results.<br />
Testing of short permanent link (configuration<br />
PL2) with the Fluke DTX 1800 Cable Analyzer.<br />
Test result. The permanent link of 2.1 m in length meets all requirements of<br />
ISO/IEC 11801.<br />
NEXT measurement of a permanent link of 2 m in length, set up with the R&M components.<br />
RL measurement of the same permanent link.<br />
The permanent link model of ISO/IEC 11801 Amendment 2<br />
ISO/IEC distinguishes between four configurations of the permanent link (Figure on the right), relevant for network planning. The determining parameters for transmission quality are NEXT (near-end crosstalk) and RL (return loss).<br />
ISO/IEC calculates the limit values of NEXT for all four configurations [in dB] for the frequency range 1 ≤ f ≤ 300 [MHz] with this formula:<br />
NEXT = -20·lg( 10^(-(74,3 - 15·lg(f))/20) + 10^(-(94 - 20·lg(f))/20) ) [1]<br />
The two terms in the bracket refer to the effect of the module at the near end and the effect of the cable. ISO/IEC specifically states that they should not be taken as individual limit values. This means that the overall limit value can be reached with very good cables and lower-quality modules or with very good modules and lower-quality cables.<br />
Different formulas apply to the frequency range 300 ≤ f ≤ 500 [MHz]. ISO/IEC calculates the NEXT limit values for configuration PL2 [in dB] like this:<br />
NEXT = 87,46 - 21,57·lg(f) [2] (blue line in Figure)<br />
Permanent link configurations acc. to ISO/IEC 11801 Amendment 2<br />
A moderate requirement applies to configuration PL3 with an additional consolidation point in the frequency range 300 ≤ f ≤ 500 [MHz]; it is calculated with this formula:<br />
NEXT = 102,22 - 27,54·lg(f) [3] (violet line in Figure)<br />
Less stringent requirements apply to the configurations PL1, PL2 and CP2 in the frequency range above 450 MHz: if the insertion loss (IL) at 450 MHz is lower than 12 dB, the required NEXT curve can be lowered by the following amount:<br />
1,4·((f - 450)/50) [4] (yellow line in Figure)<br />
NEXT limit values for the permanent link configurations 2 and 3 and for short links according to ISO/IEC 11801 Amendment 2 (figure: NEXT [dB] from 26 to 36 dB versus frequency [MHz] from 250 to 500 MHz; curves PL2, PL3 and Short Link).<br />
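The limit curves above are easy to evaluate numerically. The sketch below transcribes formulas [1], [2] and [4] as given in the text (the function names are ours); note that formula [4] yields exactly the 1.4 dB maximum relief at 500 MHz mentioned in the measurement section:

```python
import math

def lg(x: float) -> float:
    """Common (base-10) logarithm, as used in the ISO/IEC formulas."""
    return math.log10(x)

def next_pl_low(f: float) -> float:
    """Formula [1]: PL NEXT limit in dB for 1 <= f <= 300 MHz."""
    module = 10 ** (-(74.3 - 15 * lg(f)) / 20)   # near-end module term
    cable = 10 ** (-(94 - 20 * lg(f)) / 20)      # cable term
    return -20 * lg(module + cable)

def next_pl2_high(f: float) -> float:
    """Formula [2]: PL2 NEXT limit in dB for 300 <= f <= 500 MHz."""
    return 87.46 - 21.57 * lg(f)

def short_link_relief(f: float) -> float:
    """Formula [4]: allowed lowering of the NEXT curve above 450 MHz."""
    return 1.4 * ((f - 450) / 50)

print(round(next_pl_low(100), 1))    # PL limit at 100 MHz
print(round(next_pl2_high(500), 1))  # PL2 limit at 500 MHz
print(short_link_relief(500))        # maximum relief, 1.4 dB at 500 MHz
```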
Conclusion<br />
Short links in structured cabling are not only possible but – combined with high-quality Cat. 6A components – they are even recommended. Cat. 6A modules and installation cables from R&M allow the setting up of permanent links with advantages that impact positively on the cost factor:<br />
• easier installation<br />
• no unnecessary cable loops<br />
• material savings of up to two thirds<br />
• improved ventilation, lower energy consumption<br />
• reduced costs of installation and operations<br />
In the R&M freenet warranty program, R&M guarantees a NEXT headroom of 4 dB for permanent links of 15 m and longer, built with components from the R&M Advanced System. Alternatively, the high headroom gives customers the possibility to set up very short PLs which fulfill all the requirements of the standard ISO/IEC 11801 Amendment 2.<br />
3.10.5 Transmission Capacities of Class EA and FA Cabling Systems<br />
Some cabling manufacturers frequently cite 40GBase-T as the killer application for Class FA and Cat. 7A components. Other applications to legitimize Class FA are not in sight. There were various earlier discussions on broadcasting cable TV over twisted pair (CATV over TP) or on using one cable for several applications (cable sharing). Lower classes and RJ45 connector systems can demonstrably handle these applications.<br />
The standardization committees also failed to require more stringent coupling attenuation for higher classes, so EMC protection cannot be used as an argument either. When Cat. 6 was defined, there was no application for it either, but Cat. 6 ultimately paved the way for the development of 10GBase-T. Compared to then, however, there is much more knowledge today about the capabilities and limitations of active transmission technology. 10GBase-T approaches the theoretical transmission capacity of Class EA as no other application before.<br />
In order for 10GBase-T to be able to run on Class EA cabling, the application must have active echo and crosstalk cancelation. For this, a test pattern is sent over the channel during set-up and the effects on the other pairs are stored. In operation, the stored unwanted signals (noise) are used as correction factors in digital signal processing [DSP] and subtracted from the actual signal. The improvements achieved in 10GBase-T are 55 dB for RL, 40 dB for NEXT and 25 dB for FEXT. Adjoining channels are not connected or synchronized, so no DSP can be conducted between them. Active noise cancelation is therefore impossible from one channel to the next (alien crosstalk).<br />
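The cancellation principle described above can be illustrated with a toy sketch: a coupling coefficient is estimated from a known training pattern, stored, and later subtracted from the received signal. The signals and the single-tap coupling model are illustrative assumptions, far simpler than a real 10GBase-T DSP:

```python
# Toy model of echo/NEXT cancellation: during set-up a known test pattern
# is sent and the coupling into the victim pair is estimated; in operation
# the predicted noise is subtracted from the received signal.

train = [1.0, -1.0, 1.0, 1.0, -1.0]   # known test pattern
coupling = 0.25                        # physical coupling, unknown to the receiver

# Set-up phase: the far end is silent, so the victim pair sees only coupled noise.
seen = [coupling * t for t in train]
estimate = sum(s * t for s, t in zip(seen, train)) / sum(t * t for t in train)

# Operation: received = wanted far-end signal + noise coupled from the local transmitter.
wanted = [0.5, -0.5, 0.5]
local_tx = [1.0, 1.0, -1.0]
received = [w + coupling * x for w, x in zip(wanted, local_tx)]
cleaned = [r - estimate * x for r, x in zip(received, local_tx)]

print(cleaned)  # the wanted signal is recovered
```

This only works because the receiver knows its own local transmit signal; for alien crosstalk from a neighboring, unsynchronized channel no such reference exists, which is exactly the limitation stated above.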
Signal and noise spectrum of 10GBase-T<br />
The data stream is split up among the four pairs for 10GBase-T and then modulated by a pseudo-random code (Tomlinson-Harashima Precoding [THP]) to obtain an even spectral distribution of power independent of the data. Using a Fourier transformation, one can calculate the power spectrum density [PSD] of the application from the time-based signal, which is similar to PAM-16. IEEE specified this spectral distribution of power over frequency in the standard for 10GBase-T. That is important in this context because all cabling parameters are defined as a function of frequency, so their influence on the spectrum can be calculated.<br />
If attenuation is subtracted from the transmission spectrum, for example, the spectrum for the receiving signal can<br />
be calculated. The parameters RL, NEXT, FEXT, ANEXT, AFEXT etc. can be subtracted to obtain the spectrum of<br />
the various noise components. With active noise cancellation from DSP taken into account, one can calculate the<br />
actual signal and noise distribution at the receivers. The figure below contains a diagram showing these interactions.<br />
Theoretical procedure for determining the spectral distribution of signal and noise<br />
With this PSD the relative magnitudes of signal and the different noise source contributions can be compared with each other. That is what makes this procedure particularly interesting. The next figure shows the signal spectrum, the total noise spectrum and the different components of noise for an unscreened 100-m Class EA channel. IL and RL serve as examples to show how the cabling parameters are subtracted from the transmission spectrum. In the case of RL, the influence of active noise cancelation is also taken into account.<br />
The intersection between signal strength and noise strength often serves in cabling technology to define the bandwidth. If no active noise cancelation is done, return loss is the parameter that defines the noise power. In this case, the bandwidth is around 50 MHz. To ensure the bandwidth of 66 MHz needed for Gigabit Ethernet (1000Base-T), active noise cancelation of return loss is already required at that level. Using the active noise cancelation realized by 10GBase-T, the bandwidth improves to 450 MHz (see Figure).<br />
Comparison of spectral noise components<br />
If one compares the different noise parameters with each other, it is striking that alien crosstalk contributes the most to noise in Class EA as it is defined today. With modern data transmission featuring active noise cancelation, this means the achievable bandwidth is limited by alien crosstalk, which cannot be improved electronically. One can only increase bandwidth by improving alien crosstalk, i.e. ANEXT [Alien NEXT] and AFEXT [Alien FEXT].<br />
The cable jacket diameter required to meet the specification for alien crosstalk in unscreened cabling already hits<br />
users' limits of acceptance. In order to increase the bandwidths further, one has no choice but to switch to<br />
screened systems. With the effects of additional screening, alien crosstalk becomes so slight in these systems<br />
that it can be ignored for these considerations for the time being.<br />
Return loss [RL] is the biggest source of internal noise in cabling, causing 61% of the total. It is followed by FEXT, which causes 27%, and NEXT, which generates 12%. Unfortunately, RL is precisely one of the parameters whose full potential is already nearly exhausted; improvements are no longer easy to achieve. This situation is underscored by the fact that RL is defined exactly the same for Classes EA and FA. That means the biggest noise component remains unchanged when one switches from Class EA to Class FA.<br />
Transmission capacity of different cabling classes<br />
C.E. Shannon was one of the scientists who laid the mathematical foundations for digital signal transmission in the<br />
1940s. The Shannon capacity is a fundamental law in information theory based on the physical limitations of the<br />
transmission channel (entropy). It defines the maximum rate at which data can be sent over a given transmission<br />
channel. The Shannon capacity cannot be exceeded by any technical means.<br />
According to Shannon, the maximum channel capacity can be calculated as follows:<br />
KS = XT · B · log2(1 + S/N) [Bit/s]<br />
Abbreviations:<br />
KS = Channel capacity (according to Shannon)<br />
XT = Factor dependent on channel (dependent on media used, modulation methods and other factors; ranges from 0 to 1)<br />
B = Bandwidth of signal used (3 dB points)<br />
S = Received signal power in W<br />
N = Received noise power in W<br />
The higher the bandwidth of the signal and the transmission power used and the smaller the noise level, the<br />
higher the possible data transmission rate. The noise level is usually derived from the transmission power in a<br />
fixed ratio in cabling, e.g. as with RL. In this case, the signal-to-noise ratio cannot be improved by increasing the<br />
transmission power.<br />
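Shannon's formula can be applied directly. The sketch below uses illustrative numbers (four pairs, 400 MHz of usable bandwidth per pair, a 40 dB signal-to-noise ratio, XT = 1); these are our assumptions for demonstration, not values stated in the handbook, though they land close to the roughly 21.7 Gbit/s quoted later for the Class EA channel at 400 MHz:

```python
import math

def shannon_capacity(xt: float, b_hz: float, snr_db: float) -> float:
    """KS = XT * B * log2(1 + S/N), with S/N given in dB for convenience."""
    snr = 10 ** (snr_db / 10)            # convert dB to a linear power ratio
    return xt * b_hz * math.log2(1 + snr)

# Illustrative assumptions: 400 MHz per pair, 40 dB SNR, ideal channel factor.
per_pair = shannon_capacity(1.0, 400e6, 40.0)
print(f"{4 * per_pair / 1e9:.1f} Gbit/s over four pairs")
```

The example also shows why raising transmit power does not help when the noise is a fixed fraction of the signal (as with RL): S/N, and hence the capacity, stays the same.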
Page 144 of 156 © 08/2011 Reichle & De-Massari AG R&M <strong>Data</strong> <strong>Center</strong> <strong>Handbook</strong> V2.0
www.datacenter.rdm.com<br />
One can now calculate the signal and noise power from the power density spectra of received signal and total<br />
noise. The integral of the area under the signal spectrum corresponds to the signal power S and the one under the<br />
noise spectrum to the noise power N. S and N can now be applied directly in the Shannon formula.<br />
The interesting thing about calculating S and N from the power density spectrum is that one can change the individual<br />
cabling parameters independently of each other and investigate their influence on channel capacity. That<br />
means the Shannon capacities can be calculated for the different classes of cabling.<br />
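The integration step can be sketched as follows: trapezoidal integration of sampled power density spectra yields S and N, which then go straight into the Shannon formula. The spectra below are made-up placeholders, not measured data:

```python
import math

def integrate(psd_w_per_hz, freqs_hz):
    """Trapezoidal integral of a power density spectrum -> total power in W."""
    total = 0.0
    for i in range(len(freqs_hz) - 1):
        df = freqs_hz[i + 1] - freqs_hz[i]
        total += 0.5 * (psd_w_per_hz[i] + psd_w_per_hz[i + 1]) * df
    return total

freqs = [0.0, 100e6, 200e6, 300e6, 400e6]             # sample grid in Hz
signal_psd = [4e-12, 3e-12, 2e-12, 1.5e-12, 1e-12]    # placeholder spectra, W/Hz
noise_psd = [4e-16, 3e-16, 3e-16, 2e-16, 2e-16]

s = integrate(signal_psd, freqs)                      # received signal power S
n = integrate(noise_psd, freqs)                       # received noise power N
capacity = 400e6 * math.log2(1 + s / n)               # Shannon with XT = 1
print(f"S = {s:.3e} W, N = {n:.3e} W, capacity = {capacity / 1e9:.2f} Gbit/s")
```

Because each cabling parameter enters the spectra independently, one can vary a single parameter (e.g. ANEXT) and rerun exactly this calculation to see its isolated effect on channel capacity.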
Comparison of the Shannon capacities of different cabling channels at 400 MHz<br />
The Shannon capacities of different cabling configurations are compared in the Figure. This comparison shows which changes lead to a substantial increase in channel capacity and are therefore cost-effective and yield actual added value.<br />
The first line in the picture corresponds to an unscreened Class EA channel in accordance with the standard. This line serves as the reference for comparison with the other configurations. The switch from unscreened to screened cabling improves alien crosstalk, thereby increasing the channel capacity by 14%. The switch from a Cat. 6A cable to a Cat. 7A cable mainly reduces attenuation and thus increases the signal level on the receiver side. The channel capacity increases by a further 5% in the process, to 119%. If one also replaces the RJ45 Cat. 6A jacks with Cat. 7A jacks, the channel capacity rises by only 1%. The explanation for this minimal increase is that the new jacks mainly improve NEXT and FEXT, but these values are already sufficiently low due to the active noise cancelation.<br />
Outlook for possible 40GBase-T over 100 meters<br />
IEEE specified that a channel capacity of 20 Gbit/s is needed to realize 10GBase-T. That corresponds well to the 21.7 Gbit/s capacity achieved with the Class EA channel at 400 MHz. One can therefore assume that 40GBase-T requires a channel capacity of 80 Gbit/s.<br />
Using the power density spectrum, one can now investigate what the ideal situation would be for signal and noise.<br />
The signal spectrum has maximum strength if one uses a cable with a maximum wire diameter. The considerations<br />
below are based on an AWG-22 cable, assuming customers would not accept cables with larger diameters for<br />
reasons related to handling and space consumption.<br />
The lowest achievable noise level is limited by thermal noise, which is produced by the thermal agitation of the charge carriers in the copper conductors. It equals -167 dBm/Hz at room temperature. Considering that it is practically infeasible to cool cabling artificially, e.g. with liquid nitrogen, this noise level cannot be reduced further.<br />
To keep total noise in the range of this irreducible thermal noise, one must reduce all other noise factors to the point where they are below the level of thermal noise even in a worst-case scenario. For Class FA cabling, that is true if the following basic conditions are met:<br />
R&M <strong>Data</strong> <strong>Center</strong> <strong>Handbook</strong> V2.0 © 08/2011 Reichle & De-Massari AG Page 145 of 156
www.datacenter.rdm.com<br />
Alien crosstalk:<br />
• PSANEXT = 30 dB higher than FA level<br />
• PSAFEXT = 25 dB higher than FA level<br />
Active noise cancellation (values in parentheses: 10GBase-T):<br />
• RL = 90 dB (55 dB)<br />
• NEXT = 50 dB (40 dB)<br />
• FEXT = 35 dB (25 dB)<br />
One can now calculate the Shannon capacity for this hypothetical cabling system based on the above assumptions (see Figure). One achieves a channel capacity of 80 Gbit/s at a bandwidth of around 1 GHz. For a PAM signal, a bandwidth of 1000 MHz corresponds to a symbol rate of 2000 MSymbols/s or 2 GBd (baud). To achieve a throughput of 40 Gbit/s with this symbol rate (10 Gbit/s on each of the four pairs), one needs 5 bits per symbol. That corresponds to PAM-32 coding.<br />
Channel capacity according to Shannon for<br />
the proposed 40GBase-T channel over 100 m<br />
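The coding arithmetic above can be checked in a few lines, using the figures as stated in the text (four pairs, 2 GBd, a 40 Gbit/s target):

```python
# Arithmetic behind the PAM-32 choice: 40 Gbit/s split over four pairs
# at 2 GBd requires 5 bits per symbol, i.e. 2**5 = 32 signal levels.

pairs = 4
symbol_rate_bd = 2e9        # 2 GBd per pair (1000 MHz PAM bandwidth)
throughput = 40e9           # target for 40GBase-T

bits_per_symbol = throughput / pairs / symbol_rate_bd
levels = 2 ** bits_per_symbol

print(bits_per_symbol)  # -> 5.0
print(levels)           # -> 32.0
```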
The channel bandwidth is approximately 1.4 GHz for the conditions mentioned above. That means the portions of the signal above 1.4 GHz arrive at the receiver below the level of thermal noise. At 1 GHz, the signal-to-noise ratio amounts to 14 dB. This should allow a 40GBase-T protocol to be operated. On the other hand, the Shannon capacity at 1.4 GHz is just around 110 Gbit/s, which is not sufficient for the operation of 100GBase-T.<br />
For 100-m cabling, one can conclude that it is physically impossible to develop a 100GBase-T. By contrast, 40GBase-T appears to be technically feasible, albeit extremely challenging:<br />
• 90 dB RL compensation >>> A/D converter with better resolution (+6 bits)<br />
• Clock rate 2.5 times higher than for 10GBase-T<br />
• Substantial heat generation >>> reduced port density in active equipment<br />
• Extremely low signal level >>> EMC protection needed<br />
The cabling attenuation exceeds 67 dB at 1000 MHz. In addition, the available voltage range with PAM-32 is subdivided into 32 levels. That means the distance from one level to the next is just 0.03 mV, more than 20 times less than in 10GBase-T. The protection from outside noise must therefore also be improved accordingly.<br />
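The 0.03 mV figure can be reproduced with a back-of-the-envelope calculation. The transmit amplitude assumed below (roughly 2 V peak-to-peak, in the region used by BASE-T PHYs) is our illustrative assumption, not a value from the text:

```python
# Received PAM level spacing after 67 dB of cable attenuation at 1 GHz.

tx_range_v = 2.0              # assumed transmit voltage range (Vpp), illustrative
attenuation_db = 67.0         # cable attenuation at 1000 MHz (from the text)
levels = 32                   # PAM-32

rx_range_v = tx_range_v * 10 ** (-attenuation_db / 20)
spacing_mv = rx_range_v / (levels - 1) * 1000   # 31 intervals between 32 levels

print(f"{spacing_mv:.3f} mV per level step")
```

The result is on the order of 0.03 mV, matching the spacing quoted above and making clear why EMC protection becomes critical at these signal levels.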
Alternative approaches for 40GBase-T<br />
Many experts think the requirements for a 40G protocol over 100 m are too challenging for use under field conditions.<br />
Moreover, most experts consider the main use of 40GBase-T to be in data centers to connect servers in the<br />
access area and not in classic structured horizontal cabling. The cabling lengths required in the DC are much<br />
shorter and a reduced transmission distance is certainly reasonable.<br />
For instance, if the PL is fixed at a maximum of 45 m and the patch cable lengths at a total of 10 m, attenuation falls to the point that the signal distance from level to level at 1 GHz and PAM-32 is the same as it is now with 10GBase-T (417 MHz and PAM-16). That seems to be a good compromise between achievable cable length and noise resistance and is a good basis for further considerations. One positive side effect of the reduced attenuation is that somewhat more noise can be allowed while still maintaining the same signal-to-noise ratio:<br />
Alien crosstalk:<br />
• PSANEXT = 25 dB higher than FA level<br />
• PSAFEXT = 25 dB higher than FA level<br />
Active noise cancellation (values in parentheses: 10GBase-T):<br />
• RL = 75 dB (55 dB)<br />
• NEXT = 45 dB (40 dB)<br />
• FEXT = 30 dB (25 dB)<br />
A Shannon capacity of 86 Gbit/s is achieved based on the above assumptions.<br />
Power density spectrum of 40GBase-T cabling shortened to the greatest possible extent<br />
Since alien crosstalk for the Class FA channel must be improved by an additional 25 dB to achieve the required channel capacity, the Class FA channel as it is defined today will not suffice to support a 40-Gbit/s connection. An active noise cancellation of 75 dB for RL will also be required. To achieve this level, one will probably have to use a higher resolution for the A/D converter than was the case with 10GBase-T. This higher resolution will probably also improve NEXT and FEXT to a similar extent and create a certain reserve. With this reserve, the use of RJ45 Cat. 6A jacks cannot be completely ruled out.<br />
By applying the parameters as postulated in the power density spectrum and then calculating the Shannon capacity, one can now compare the suitability of the different types of cabling for the 40GBase-T channel.<br />
Comparison of channel capacity at 1 GHz for a 45 m PL and active noise cancellation of RL = 75 dB, PSNEXT = 60 dB and PSFEXT = 45 dB<br />
Class EA is only defined up to 500 MHz. The parameter curves between 500 MHz and 1 GHz were linearly
extrapolated for these calculations and are thus debatable. The figure does clearly show, however, that Class
EA through Class FA cabling does not suffice to achieve the required channel capacity for 40GBase-T. Alien crosstalk
(ANEXT and AFEXT) is the limiting factor for these cabling classes. By improving alien crosstalk by 25 dB, one
could achieve the required channel capacity for 40GBase-T, regardless of whether a Cat. 7A or Cat. 6A
connector system is used.
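The comparison method described above (postulate a power density spectrum, then compute the Shannon capacity) can be sketched as a numerical integration of C = ∫ log2(1 + SNR(f)) df. The SNR profile below is a purely illustrative assumption, not the study's actual parameter set.

```python
import math

def shannon_capacity_gbits(bandwidth_hz, snr_db_fn, steps=1000):
    """Numerically integrate C = ∫ log2(1 + SNR(f)) df over the band.

    snr_db_fn(f) must return the signal-to-noise ratio in dB at
    frequency f; the midpoint rule is accurate enough for a sketch.
    """
    df = bandwidth_hz / steps
    total = 0.0
    for i in range(steps):
        f = (i + 0.5) * df
        total += math.log2(1 + 10 ** (snr_db_fn(f) / 10)) * df
    return total / 1e9  # Gbit/s

# Hypothetical SNR profile: high at low frequencies, rolling off roughly
# like cable attenuation (values chosen only to illustrate the method).
snr_db = lambda f: 60 - 45 * math.sqrt(f / 1e9)

cap = shannon_capacity_gbits(1e9, snr_db)
print(f"channel capacity ≈ {cap:.1f} Gbit/s")
```

Repeating such an integration with the measured NEXT, FEXT and alien-crosstalk limits of each cabling class is what produces a comparison of the kind shown in the figure.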
The Cat. 7A cable has to be newly specified with better alien crosstalk characteristics in order to improve this parameter.
To illustrate this difference, this newly specified cable is referred to here as Cat. 7B. That designation is not
based on any standard, however.
The key to future-proofing the cabling therefore lies in the specification of a new Cat. 7B cable and not necessarily in
the use of a Cat. 7A connector system.
Summary
• 10GBase-T up to 100 m appears to be the limit of what is feasible for unscreened cabling.
• 40GBase-T up to 100 m is technically feasible but appears to be too challenging for field use. Presumably it will be defined with a reduced cable length (e.g. a 45 m permanent link).
• 100GBase-T up to 100 m appears to be technically unfeasible.
• Today's Class FA standardization does not support 40GBase-T because the specification for alien crosstalk is not stringent enough; a new Class FB will be needed.
• It has not yet been shown that a Cat. 7A connector system is necessary to achieve 40GBase-T.
Given the uncertainties about connector system requirements, one should wait for clarification in the standardization
process before using Cat. 7A connecting hardware. Cat. 7A cables with improved alien crosstalk (Cat. 7B) can be
used to be ready for 40GBase-T.
3.10.6 EMC Behavior in Shielded and Unshielded Cabling Systems
The introduction of the new 10GBase-T application is the motivation for revisiting the EMC behavior of shielded
and unshielded cabling, a topic that has already been examined a number of times.
The use of ever higher orders of modulation may well reduce the necessary bandwidth, but it makes the new protocols
more and more susceptible to external interference. It is worth noting here that the separation between
symbols with 10GBase-T is approximately 100 times smaller than with 1000Base-T (see figure below).
Comparison of the signal strength of various Ethernet protocols at the receiver
The figure shows the relative levels of signal strength at the receiver for various protocols after the signal has
been transmitted through 100 m of Class EA cabling. As a general principle, the higher the signal's frequency or
bandwidth, the larger the attenuation through the cabling. For EMC reasons, the level of the output
signal is not increased above +/- 1 V. Of this 2 V range, depending on the frequency and attenuation in the cabling,
an ever smaller part reaches the receiver. In addition, the voltage differences from one symbol to the next get
increasingly smaller as a result of higher-order modulation. While in the past the separation between symbols at the
receiver decreased by roughly a factor of three for each step from 10M to 100M to 1G, in the final step
from 1G to 10G it decreased by a factor of 100.
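This factor-of-100 estimate can be reproduced with a back-of-the-envelope calculation; the insertion-loss figures below are rough assumptions for 100 m of Class EA cabling at each protocol's main signalling frequency, not exact standard values.

```python
# Rough comparison of the symbol separation at the receiver.
# (PAM levels, assumed insertion loss in dB over 100 m at the
#  protocol's main signalling frequency) - illustrative values only.
protocols = {
    "1000Base-T": (5, 14),    # PAM-5, ~62.5 MHz
    "10GBase-T": (16, 42),    # PAM-16, ~400 MHz
}

def symbol_separation_v(levels, il_db, tx_swing_v=2.0):
    """Voltage step between adjacent symbol levels after attenuation."""
    received_swing = tx_swing_v * 10 ** (-il_db / 20)
    return received_swing / (levels - 1)

sep = {name: symbol_separation_v(*p) for name, p in protocols.items()}
ratio = sep["1000Base-T"] / sep["10GBase-T"]
print(f"separation shrinks by a factor of about {ratio:.0f}")
```

With these assumed figures the separation shrinks by roughly two orders of magnitude, matching the factor of 100 quoted above: more PAM levels divide a received swing that is itself much smaller because of the higher signalling frequency.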
If some noise is added for visualization, it becomes obvious that at faster data-transmission rates the
sensitivity to interference, and thus the sensitivity to EMC problems, increases tremendously.
During 2008, a group of cable suppliers, including R&M, came together to compare the EMC behavior of various
cabling types in a neutral environment. To ensure the independence and objectivity of the results, a third-party test
laboratory was commissioned with the investigation, specifically the “Gesellschaft für Hochfrequenz Messtechnik
GHMT” (“Organization for High Frequency Testing”) located in the German town of Bexbach. The goal of the
study was to answer the following questions concerning the selection of the cabling system:
• Which parameters must be estimated in order to draw meaningful conclusions about the EMC behavior of shielded and unshielded cabling systems?
• Which special measures are necessary when using shielded or unshielded cabling in order to ensure operation that conforms to legal requirements and standards?
• How does the EMC behavior of shielded and unshielded cabling compare during the operation of 1000Base-T and 10GBase-T?
The basic idea was not to pit shielded and unshielded systems against each other, but rather to clearly indicate
the basic conditions that must be adhered to in order to ensure problem-free operation of 10GBase-T that is in
conformance with legal requirements. By doing so, assistance can be offered to planners and end customers so
they can avoid confusion and uncertainty when selecting a cabling system.
The results presented are exclusively those from the independent EMC study carried out by GHMT; the interpretation
of the results and conclusions drawn are, however, based on analyses by R&M.
Equipment under test and test set-up
The examination of six cabling systems was planned:
• 1x unshielded Cat 6
• 2x unshielded Cat 6A
• 3x shielded Cat 6A with shielded twisted pairs (foil) and different degrees of coverage of the braided shield (U-FTP without braided shield, S-STP “light” with moderate braided shield, S-STP with good braided shield)
The figure below shows the results of the preliminary measurements according to ISO/IEC 11801 (2008-04).
The values given are the margins against the Class EA limits, with the exception of coupling attenuation, which is given
as an absolute value.
Cable parameters in accordance with ISO/IEC 11801 (2008-04)
Surprisingly, the transmission parameters (IL, NEXT, PS NEXT, TCL and RL) of the Cat 6A UTP systems are at a
similar level to those of the shielded systems. Except for the older Cat 6 system, the values are more or less
comparable to each other. It is noticeable that the PS ANEXT requirements are barely reached, if at all, by the
unshielded systems. As soon as shielding is present, the ANEXT values improve by 30 to 40 dB, so much that
they no longer present a problem.
With the EMC parameter “coupling attenuation”, the difference between the systems becomes very clear. The
Cat 6 system deviates widely from the necessary values, while the newer UTP systems meet the requirements.
Here too, there is a clear difference between the shielded and unshielded cabling systems. In international standardization,
TCL is used as the EMC parameter for unshielded systems instead of the coupling attenuation that is
used with shielded systems. The comparison of the values for TCL and coupling attenuation of Systems 0 to 2,
however, puts this practice into question: the relatively small difference in TCL between System 0 and Systems 1
and 2 corresponds to a huge difference in coupling attenuation (Pass for TCL, Fail for coupling attenuation).
Based on these poor results, no EMC measurements were carried out with System 0 for the remainder of the study.
As is common with all EMC investigations, cabling systems cannot be examined for EMC aspects on their own,
but only in connection with the active components as a total system. The following active components were used:
• Switch: Extreme Networks Summit X450a-24t (slot: XGM2-2bt)
• Server: IBM X3550 (Intel 10 Gigabit AT server adapter)
Both servers, which created data traffic of 10 Gbit/s, were connected to a switch through the cabling. The figure
shows a diagram of the test set-up.
Functional diagram of the test set-up
In order to be able to test the EMC properties from all sides, all data-transmission components for the test were
installed on a turntable (d = 2.5 m) in an anechoic EMC test chamber. An anechoic test chamber is a room that
prevents electric or acoustic reflections by means of absorbing layers on the walls, and it shields the experiment
from external influences. A 19” rack containing the switches and the servers was placed in the middle of the
turntable, with an open 19” rack with patch panels on both sides. The panels and active components were
connected with 2 m patch cords. Each end of the tested 90 m installed cables was terminated to one of the RJ45
panels. To achieve a high degree of reproducibility and a well-defined test set-up, the cables were fixed to
a wooden cable rack of the kind suggested as a test set-up by CENELEC TC 46X WG3. The rack was located behind
the 19” racks.
Basically, this is a system set-up that simulates the situation in a data center.
Radiated power in accordance with EN 55022
This test is mandatory under the EU's EMC directive and thus has the force of law. The operation of equipment and
systems that do not pass this test is forbidden in the EU.
The test is designed to recognize radiated interference from the tested system that could impair the operation of
receiving equipment such as broadcast radios, TVs and telecommunication equipment.
The test system’s receiving antenna is erected at a distance of 3 m from the equipment under test, and the radiated
power is logged in both horizontal and vertical polarization. There are two limits: Class A (for office environments)
with somewhat less stringent requirements, and Class B (for residential areas) with correspondingly tougher requirements.
Emission measurements for a typical shielded and a typical unshielded system (limits: red = Class B, violet = Class A)
As an example, the figure above compares the emission measurements of System 2 and System 5 for a 10GBase-T
transmission. It is obvious that the unshielded system exceeds the Class B limits at several points, while the shielded
system provides better protection, especially in the upper frequency range, and thus meets the requirements.
Here it makes no difference whether the shielded cabling was grounded on one side or on both.
Because 10GBase-T uses the frequency range up to more than 400 MHz for data transmission, it can be
assumed that for unshielded systems the radiation could not be reduced sufficiently even if filtering techniques
were applied. The limits for Class A, in contrast, are met by both cabling types.
Other measurements show that for 1000Base-T, both shielded and unshielded cabling can comply with the limits
for Class B.
From these results it can be concluded that, at least in the EU, unshielded cabling systems should not be operated
with 10GBase-T in residential environments. The responsibility for EMC compliance lies with the system
operator. In an office environment, which also includes data centers, it is permissible to operate an unshielded
cabling system with 10GBase-T.
Immunity from external interference
The EMC immunity tests are based on the EN 61000-4-X series of standards and were carried out following the
E1/E2/E3 conditions for different electromagnetic environments in accordance with the MICE table of EN 50173-1
(see next figure).
In these tests the continuous transmission of data between the servers was monitored with a protocol analyzer,
and the transmission system was then subjected to defined stress conditions. It was recorded from what stress
level onward the transmission rate was impaired.
Stress conditions for Classes E1 to E3 in accordance with the MICE table from EN 50173-1
Immunity against high-frequency radio waves
The test outlined in EN 61000-4-3 serves to examine the immunity of the equipment under test (EUT) against
radiated electromagnetic fields in the frequency range from 80 MHz to 2.0 GHz. This test simulates the influence
of interference sources such as radio broadcast and TV transmitters, mobile phones, wireless networks and others.
The transmitting antenna was set up at a distance of 3 m from the EUT, which was then irradiated from all four
sides. The field strength, measured at the location of the equipment under test, was selected in accordance with
the MICE table (see figure, “Radiated high frequency”).
Results in accordance with EN 61000-4-3 (left) / Operational test of a mobile phone (right)
All shielded cabling variants are well suited for 10GBase-T operation in offices and light industrial areas. For
harsher industrial areas, an additional overall braided shield (S-FTP design) is necessary. Single- or double-sided
grounding has no influence on the immunity against external radiation. Good unshielded cabling is suitable for
1000Base-T in offices and areas used for light industry. For the use of 10GBase-T with unshielded cabling, however,
additional protective measures are necessary, e.g. metallic raceway systems and/or additional separation
from the sources of interference.
An additionally performed operational test confirmed the high sensitivity of the unshielded systems to wireless
communications equipment in the 2 m to 70 cm band when operating with 10GBase-T. When a personal radio or
mobile phone was operated at a distance of 3 m (see figure above), the data transmission over unshielded cabling
was interrupted, while all shielded systems experienced no impairment of the transmission.
Immunity against interference due to power cables
A test in accordance with EN 61000-4-4 was carried out in order to check the immunity of the equipment under
test (EUT) against repeated, fast transients as produced by the switching of inductive loads (motors), by relay
contact chatter and by electronic ballasts for fluorescent lamps. In order to achieve reproducible coupling between
the power cable and the EUT, a standardized capacitive coupling clamp was used in this test. The disturbances
covered voltage peaks of 260 to 4000 V with a wave shape of 5/50 ns and a separation of 0.2 ms. The voltage levels
were applied in accordance with the MICE table (see figure, “Fast transient (burst)”).
Results in accordance with EN 61000-4-4 (left) / Results of the operational test “mesh cable tray” (right)
The figure on the left shows the results of this test. All shielded systems allow the use of 10GBase-T in all environmental
conditions (E1, E2, E3). At the same time, higher-quality shielding provided better immunity against fast transients.
High-quality unshielded cabling allows the use of 1000Base-T in an office environment. For an industrial environment
and for 10GBase-T, shielded cabling is necessary. Unshielded cabling, if it is to support 10GBase-T, needs
additional protective measures such as the careful separation of the data cables from the power cables.
A double-sided ground improves the immunity of shielded cabling against external fast transients beyond the
minimum requirements of the standard. If the shield is not continuous, its effectiveness with 10GBase-T is
negated, and the protection is then the same as with unshielded cabling. At lower frequencies (such as those used in
1000Base-T), a certain protective effect of the non-continuous shield is apparent, especially with double-sided
grounding.
An operational test with fluorescent lamps located 0.5 m from the data cable showed that the conditions
in the standards-based test were entirely realistic. The interference that arises when a fluorescent
lamp is switched on influenced the 10GBase-T data transmission in the same way as during the test according to
the standard. Not only the lamp itself, but also the power cable leading to it, created interference. It must therefore be
ensured that both are sufficiently separated from the data cabling.
In order to compare the standard test using the coupling clamp to an actual installation situation, the “mesh cable
tray” experiment was also carried out. The data cables and a power cable were laid in a mesh cable tray with a
total length of 30 m and with a constant separation of between 0 and 50 cm. The interference signal according to the standard
was then applied to the power cable. The figure shows the results of these measurements. A comparison with
the results of the standard test shows that the standardized test
simulates a cable separation of approximately 1 to 2 cm. In order to guarantee the operation of 10GBase-T,
a separation between the data and power cables of at least 30 cm must be maintained with an unshielded system.
The shielded cabling met the requirements even without any separation between the cables.
According to EN 50174-2, a separation of only 2 cm is defined for the unshielded system in this configuration,
which, however, is insufficient for 10GBase-T. In other words, with unshielded cables, when using 10GBase-T, a
far greater separation must be maintained than is defined in the standard.
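The separation findings from the mesh-cable-tray experiment can be condensed into a small decision helper; the values simply restate the study's results and the EN 50174-2 figure quoted above and are not normative.

```python
def required_separation_cm(shielded: bool, ten_gbase_t: bool) -> int:
    """Minimum data/power cable separation (cm) in a shared mesh cable
    tray, as suggested by the study's results (not a normative value).

    Shielded cabling passed with no separation at all; unshielded
    cabling needs at least 30 cm for 10GBase-T, far more than the
    2 cm that EN 50174-2 defines for this configuration.
    """
    if shielded:
        return 0
    return 30 if ten_gbase_t else 2

print(required_separation_cm(shielded=False, ten_gbase_t=True))  # 30
```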
A further test in accordance with EN 61000-4-6 was carried out to check the system’s immunity against conducted
RF interference in the range from 150 kHz to 80 MHz on power cables located nearby. Power cables can act as
antennas for high-frequency interference coming from external sources (such as shortwave and VHF transmitters),
or they may intentionally carry a powerline signal. In this test, too, the previously mentioned coupling clamp was
used. The stress levels were chosen in accordance with the MICE table (see figure, “Conducted high frequency”).
The results corresponded to the familiar pattern: the shielded cabling meets all requirements for 10GBase-T.
Unshielded cabling meets the requirements for offices and areas with light industry for 1000Base-T; for
10GBase-T transmissions, however, additional protective measures such as increased separation of the data cabling
from the power cabling are necessary.
Immunity from magnetic fields arising from power cables
This test in accordance with EN 61000-4-8 checks the ability of a system to function in the presence of strong
magnetic fields at 50 Hz. Such magnetic fields can be generated by power lines (cables or busbars) or power-distribution
equipment (transformers, distribution panels). The stress levels were selected in accordance with the
MICE table (see figure, “Magnetic fields”). All cabling fulfills the highest environmental class (E3) with both
1000Base-T and 10GBase-T. No difference was determined between the susceptibility of shielded and unshielded
cabling, and no heightened susceptibility of the shielded cabling due to ground loops was observed.
Immunity against electrostatic discharge
This test in accordance with EN 61000-4-2 checks the immunity of a system against electrostatic discharge. This
phenomenon, which we all know from everyday life when a spark jumps from a finger to a conductive surface,
can be generated in a reproducible manner with a test rig with metallic test fingers. Environmental and climatic
conditions – such as low humidity, insulating plastic floors and clothing made of synthetic fibers – can promote
electrostatic charging.
The test points were selected to simulate normal contact with the cabling during operation and maintenance. At
each test point, 10 sparks per polarity were generated with an interval of more than 1 second between them. The test levels
used were set in accordance with the MICE table (see figure, “Electrostatic discharge – contact / air”).
The shielded cabling systems did not show any sensitivity to electrostatic discharges; no faults arose. With unshielded
cable, the active devices reacted very sensitively to discharges as soon as these could reach the
signal conductors.
The good performance of shielded cabling can be attributed to the fact that the shield serves as a bypass path
for the flashover, so that no energy makes its way inside the cable. For 10GBase-T operation with
unshielded cabling, additional measures must ensure that no electrostatic discharges can take place. Suitable
protective measures are well known from electronics manufacturing and include grounding stations, ESD wrist
straps, antistatic floors, etc.
Summary
It has been shown that the introduction of 10GBase-T does indeed have a considerable impact on the selection of cabling.
The increased sensitivity of 10GBase-T transmissions compared to 1000Base-T was clearly evident with unshielded
cabling in terms of immunity against external interference.
In order to guarantee the operation of 10GBase-T, it is not sufficient to pay attention to the cabling alone; rather,
the environmental conditions must also be considered and the cabling components must be properly selected.
Coupling attenuation can serve as a qualitative comparative parameter for the EMC behavior of cabling.
Summing up, this investigation has shown that shielded cabling can be used for 10GBase-T without any problems
in every environmental class. The following applies: the better the quality of the shielding, the smaller the emissions
and the better the immunity of the cabling against external interference.
Unshielded cabling, in contrast, is suited for 10GBase-T only outside residential areas and only in conjunction with
additional preventive measures. Within the EU, such cabling may be used only outside residential
areas, in dedicated work areas (such as offices and data centers).
In choosing between shielded and unshielded cabling for 10GBase-T, the influence and application of additional
protective measures and operational limitations must be taken into consideration.
Recommendations for the operation of 10GBase-T
In industrial environments (Classes E2 and E3), shielded cabling should be used. In harsher industrial environments
(E3), an S-FTP shield design with a braided overall shield is necessary and, if possible, double-sided
grounding should be applied to the cabling.
In residential areas, unshielded cabling should not be used. In office areas and data centers with unshielded
cabling, the above-mentioned additional protective measures should be applied.
3.10.7 Consolidating the Data Center and Floor Distributors
When planning a new floor distributor system as part of a cabling system redesign, a new floor distributor is
usually placed in the data center or server room. This offers the advantage of using a quality infrastructure that
generally already exists, including a raised floor, cooling system, UPS, etc.
However, modern EMC-based designs advise against this if copper is used as the tertiary medium. The reason is
that these modern EMC designs are based on a division of a building into different lightning protection zones. The
overvoltage caused by an indirect lightning strike will be greatest in the outer area of the building (lightning
protection zone 0) and will lead to an extremely high induction current.
To put it simply, metallic lines entering a building from the outside will carry this high current into the building (lightning
protection zone 1). Therefore, in conventional telephone cabling systems, surge protection was provided at the
building entry. However, there are also other zones in the building itself that would have a problem with the
residual current remaining behind the surge protection element; electronic components could be destroyed by this
current. This resulted in the definition of further lightning protection zones, with the remaining current diminishing
after each surge protection element in subsequent zones.
Data center server units have a very high protection requirement. These units are “packed” in lightning protection
zone 2, so one must take care to minimize the lightning's residual current within the data center. Standards-compliant
designs make provisions that each metallic conductor routed into or out of the data center be
secured with a surge protection element.
If a floor distributor with a twisted-pair concept is placed in a data center, each twisted-pair line would therefore
have to be protected with a surge protection element of sufficient quality (suppliers for this include Phoenix and
other companies). This is very rarely put into practice in the data cabling system (in contrast to the power cabling
system!), yet the risk does exist.
Recommendation
When setting up a high-availability data center, disregarding this generally established lightning protection zone
concept (DIN EN 62305 / VDE 0185-305) would be negligent, and a floor distributor should therefore not be
placed in the data center if at all possible.
4. Appendix
References
Content from the following encyclopedias, studies, manuals and articles was used in this handbook or can serve as additional reference:
BFE
• Energy-Efficient Data Centers through Sensitization via Transparent Cost Accounting (original German title: Stromeffiziente Rechenzentren durch Sensibilisierung über eine transparente Kostenrechnung)
BITKOM
• Operationally Reliable Data Centers (original German title: Betriebssichere Rechenzentren)
• Compliance in IT Outsourcing Projects (original German title: Compliance in IT-Outsourcing-Projekten)
• Energy Efficiency in the Data Center (original German title: Energieeffizienz im Rechenzentrum)
• IT Security Compass (original German title: Kompass der IT-Sicherheitsstandards)
• Liability Risk Matrix (original German title: Matrix der Haftungsrisiken)
• Planning Guide for Operationally Reliable Data Centers (original German title: Planungshilfe betriebssichere Rechenzentren)
• Certification of Information Security in the Enterprise (original German title: Zertifizierung von Informationssicherheit im Unternehmen)
CA
• The Avoidable Cost of Downtime
COSO
• Internal Monitoring of Financial Reporting – Manual for Smaller Corporations – Volume I, Summary (original German title: Interne Überwachung der Finanzberichterstattung – Leitfaden für kleinere Aktiengesellschaften – Band I Zusammenfassung)
IT Governance Institute
• Cobit 4.1
Symantec
• IT Risk Management Report 2: Myths and Realities
UBA
• Material Stock of Data Centers in Germany (original German title: Materialbestand der Rechenzentren in Deutschland)
• Future Market for Energy-Efficient Data Centers (original German title: Zukunftsmarkt Energieeffiziente Rechenzentren)
Information from suppliers such as IBM, HP, Cisco, Dell, I.T.E.N.O.S., SonicWall and others was also used.
Standards
ANSI/TIA-942 Telecommunications Infrastructure Standard for Data Centers
ANSI/TIA-942-1 Data Center Coaxial Cabling Specification and Application Distances
ANSI/TIA-942-2 Telecommunications Infrastructure Standard for Data Centers, Addendum 2 – Additional Guidelines for Data Centers
ANSI/TIA-568-C.0 Generic Telecommunications Cabling for Customer Premises
ANSI/TIA-568-C.1 Commercial Building Telecommunications Cabling Standard
ANSI/TIA-568-C.2 Balanced Twisted-Pair Telecommunications Cabling and Components Standard
ANSI/TIA-568-C.3 Optical Fiber Cabling and Components Standard
ANSI/TIA/EIA-606-A Administration Standard for Commercial Telecommunications Infrastructure
ANSI/EIA/TIA-607-A Grounding and Bonding Requirements for Telecommunications
ANSI/TIA-569-B Commercial Building Standard for Telecommunications Pathways and Spaces
ANSI/TIA/EIA-568-B.2-10
EN 50310 Application of equipotential bonding and earthing in buildings with information technology equipment
EN 50173-1 + EN 50173-1/A1 (2009) Information technology – Generic cabling systems – Part 1: General requirements
EN 50173-2 + EN 50173-2/A1 Information technology – Generic cabling systems – Part 2: Office premises
EN 50173-5 + EN 50173-5/A1 Information technology – Generic cabling systems – Part 5: Data centers
EN 50174-1 Information technology – Cabling installation – Part 1: Specification and quality assurance
EN 50174-2 Information technology – Cabling installation – Part 2: Installation planning and practices inside buildings
EN 50346 + A1 + A2 Information technology – Cabling installation – Testing of installed cabling
IEEE 802.3an (10GBase-T)
ISO/IEC 11801 2nd Edition, Amendment 1 (04/2008) + Amendment 2 (02/2010) Information technology – Generic cabling for customer premises
ISO/IEC 24764 Information technology – Generic cabling systems for data centers
ISO/IEC 14763-1 AMD 1 Information technology – Implementation and operation of customer premises cabling – Part 1: Administration; Amendment 1
ISO/IEC 14763-2 Information technology – Implementation and operation of customer premises cabling – Part 2: Planning and installation
ISO/IEC 14763-3 Information technology – Implementation and operation of customer premises cabling – Part 3: Testing of optical fibre cabling (ISO/IEC 14763-3:2006 + A1:2009)
ISO/IEC 18010 Information technology – Pathways and spaces for customer premises cabling
IEC 61935-1 Specification for the testing of balanced and coaxial information technology cabling – Part 1: Installed balanced cabling as specified in EN 50173 standards (IEC 61935-1:2009, modified); German version EN 61935-1:2009
Please find additional information on R&M products and solutions on our website: www.rdm.com
Headquarters
Reichle & De-Massari AG
Binzstrasse 31
CHE-8620 Wetzikon/Switzerland
Phone +41 (0)44 933 81 11
Fax +41 (0)44 930 49 41
www.rdm.com
Your local R&M partners
Australia
Austria
Belgium
Bulgaria
Czech Republic
China
Denmark
Egypt
France
Finland
Germany
Great Britain
Hungary
India
Italy
Japan
Jordan
Kingdom of Saudi Arabia
Korea
Netherlands
Norway
Poland
Portugal
Romania
Russia
Singapore
Spain
Sweden
Switzerland
United Arab Emirates