
Summary

Despite the validity of cabling standards for data centers, it must be assumed that a cabling system designed in accordance with these standards will not necessarily achieve the usability period that the corresponding standards stipulate for office cabling. Designs based on a requirements-oriented cabling system, with the modified layout and material selection derived from it, must not be described as incorrect. They represent an alternative and may be a reasonable option in smaller data centers or server rooms.

Data centers must be planned like a separate building within a building, and a structured, static cabling system therefore becomes more likely, at least between distributors. In any event, planners must ensure that capacity (in terms of transmission rate) is valued more highly than flexibility. Quality connection technology must ensure that fast changes and fast repairs are possible using standard materials. This is the only way to guarantee the high availability that is required.

Data center planners and operators will also face the question of whether using single mode glass fiber is still strictly necessary. The performance level of the laser-optimized multimode glass fibers available today already opens up numerous options for installation. The tables above showed what potentials can be covered with OM3 and OM4 fibers. Do single mode fibers therefore still have to be included in examinations of transmission rate and transmission link length?
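
The tables referred to above are not reproduced in this excerpt. As an illustration of the potentials in question, the commonly cited nominal reach figures from IEEE 802.3 for laser-optimized multimode fiber can be summarized as follows; this is a sketch for orientation, not a table from the handbook:

```python
# Commonly cited nominal IEEE 802.3 reach figures (in meters) for
# laser-optimized multimode fiber. Engineered links may reach further,
# so treat these as illustrative standard values, not guaranteed limits.
MMF_REACH_M = {
    "10GbE":  {"OM3": 300, "OM4": 400},   # 10GBASE-SR
    "40GbE":  {"OM3": 100, "OM4": 150},   # 40GBASE-SR4
    "100GbE": {"OM3": 100, "OM4": 150},   # 100GBASE-SR10
}
```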

A criterion that may make single mode installations absolutely necessary is, for example, the length restriction for OM3/OM4 in IEEE 802.3ba (40/100 GbE). The 150 meter limitation of OM4 fiber for 40/100 Gigabit Ethernet would likely be exceeded in installations with extensive data center layouts. Outsourced units, like backup systems housed in a separate building, could not be integrated at all. In this case, single mode is the only technical option, and a cost comparison is not necessary.
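
A minimal sketch of this decision criterion, assuming the nominal IEEE 802.3ba reach figures (100 m over OM3 and 150 m over OM4 for 40GBASE-SR4/100GBASE-SR10, 10 km over OS2 single mode for the -LR4 variants); the function name and structure are illustrative, not from the handbook:

```python
# Illustrative check of whether a planned 40/100 GbE link can stay on
# multimode fiber or is forced onto single mode by the IEEE 802.3ba
# reach limits discussed above. Reach values are nominal standard figures.
REACH_M = {
    "OM3": 100,      # 40GBASE-SR4 / 100GBASE-SR10 over OM3
    "OM4": 150,      # 40GBASE-SR4 / 100GBASE-SR10 over OM4
    "OS2": 10_000,   # 40GBASE-LR4 / 100GBASE-LR4 over single mode
}

def required_fiber(link_length_m: float) -> str:
    """Return the first fiber class whose nominal reach covers the link."""
    for fiber, reach in REACH_M.items():
        if link_length_m <= reach:
            return fiber
    raise ValueError("Link exceeds even single mode reach; rethink the topology.")

print(required_fiber(90))    # OM3 -- a typical in-room link
print(required_fiber(400))   # OS2 -- e.g. a backup system in a separate building
```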

The high cost of transceivers for single mode transmission technology is a crucial factor in configurations that can be implemented with either multimode or single mode. Experience shows that a single mode link costs three to four times more than a multimode link. With multimode, up to double the port density can be achieved. However, highly sophisticated parallel optic OM4 infrastructures also require considerable investments. The extent to which the additional expenses are compensated by lower cabling costs (CWDM on single mode vs. parallel optic OM4 systems) can only be determined through an overall examination of the scenario in question. In addition, the comparatively higher energy consumption is also a factor: single mode consumes about three to five times more watts per port than multimode.
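
To make the trade-off concrete, a rough per-link comparison can be sketched from the factors quoted above (three to four times the link cost, three to five times the watts per port). All base figures below are hypothetical placeholders, not prices or power ratings from the handbook:

```python
# Rough lifetime cost comparison for one multimode vs. one single mode link,
# using the multipliers quoted in the text. Base price, port wattage, energy
# tariff, and service life are hypothetical example inputs.
MM_LINK_COST_EUR = 1_000.0   # hypothetical multimode link (transceivers + cabling)
MM_WATTS_PER_PORT = 1.5      # hypothetical multimode port power draw
EUR_PER_KWH = 0.20           # hypothetical energy tariff
YEARS = 5                    # hypothetical service life

def lifetime_cost(link_cost_eur: float, watts_per_port: float) -> float:
    """Capex plus energy opex for one link (two ports) over the service life."""
    hours = YEARS * 365 * 24
    energy_eur = 2 * watts_per_port * hours / 1_000 * EUR_PER_KWH
    return link_cost_eur + energy_eur

multimode = lifetime_cost(MM_LINK_COST_EUR, MM_WATTS_PER_PORT)
# Text: single mode costs 3-4x per link and draws 3-5x watts per port.
single_mode_low  = lifetime_cost(3 * MM_LINK_COST_EUR, 3 * MM_WATTS_PER_PORT)
single_mode_high = lifetime_cost(4 * MM_LINK_COST_EUR, 5 * MM_WATTS_PER_PORT)

print(f"Multimode:   {multimode:8.0f} EUR")
print(f"Single mode: {single_mode_low:8.0f} to {single_mode_high:8.0f} EUR")
```

With these placeholder inputs the transceiver capex dominates; in a real comparison the energy share grows with port counts and service life, which is why the text insists on an overall examination of the scenario.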

Whether one uses multimode length restrictions as the basis for a decision in favor of single mode depends on strategic considerations in addition to a cost comparison. Single mode offers maximum performance reserves, in other words a guaranteed future beyond current standards. Planners who want to eliminate restricting factors over multiple system generations, and who are willing to accept a bandwidth/length overkill for existing applications, should plan on using single mode.

3.10 Implementations and Analyses

The 40 GbE specification is geared to High-Performance Computing (HPC) and storage devices, and is very well suited to the data center environment. It supports servers, high-performance clusters, blade servers and storage networks (SAN). The 100 GbE specification is focused on core network applications: switching, routing, interconnecting data centers, Internet exchanges (Internet nodes) and service provider Peering Points (PP). High-performance computing environments with bandwidth-intensive applications like Video-on-Demand (VoD) will profit from 100 GbE technology. Since data center transmission links typically do not exceed 100 m, multimode 40/100 GbE components will be significantly more economical than OS1 and OS2 single mode components, and will still noticeably improve performance.

