
Chapter 14. Nexus Migration Case Study

The case study in this chapter demonstrates a migration from a traditional Catalyst-based data center infrastructure to a next-generation Nexus-based data center. This case study brings together many of the topics discussed individually throughout this book and combines them with work performed by the authors in Cisco engineering labs and multiple real-world customer scenarios, including their requirements, designs, and deployments. This case study can provide a foundation for organizations looking to make a similar migration.

Existing Environment

Acme, Inc., is a large global enterprise with many diverse lines of business. Its information technology group is centralized and provides technology services to all lines of business. Acme, Inc., has been a long-time Cisco customer with a strong track record of success with the Catalyst 6500 switching family, and it has chosen to follow the industry-leading practice of a three-tier design.

The server access layer is spread across six pairs of Catalyst 6509 switches. Each pair of 6509s serves a specific area (region) of the data center and is distributed throughout the data center accordingly. Each region of the data center contains a defined set of approximately four to five virtual local area networks (VLANs) dedicated to specific server types and functions; that is, a VLAN for UNIX, a VLAN for Windows, and so on. Each pair of 6509s provides redundancy for the servers' default gateway by using Hot Standby Router Protocol (HSRP).
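As a point of reference, a minimal sketch of such an HSRP gateway on one of the access-pair SVIs might look like the following; the VLAN number, addressing, and group number are hypothetical values chosen only for illustration:

interface Vlan10
 description UNIX server VLAN (hypothetical addressing)
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt

The peer 6509 in the pair would carry the same virtual address (10.1.10.1) with a lower priority, so the servers keep a single default gateway while either switch can take over forwarding for it.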

To optimize traffic flows for server-to-server communication within the data center, a Layer 3 server distribution layer has been introduced. The primary role of this layer is to interconnect each of the six access pairs and provide network connectivity to the core layer. All connectivity to this layer is achieved using routed point-to-point links.
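A routed point-to-point link of this kind is typically built by disabling switching on the physical interface and assigning a small point-to-point subnet; the interface and addressing below are hypothetical and shown only as a sketch:

interface TenGigabitEthernet1/1
 description Routed link to access pair 1 (hypothetical addressing)
 no switchport
 ip address 10.255.0.1 255.255.255.252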

The core layer provides connectivity to the campus network and connects to various WAN routers for remote office connectivity. This layer is completely Layer 3 routed, using EIGRP as the IGP. All connectivity into this layer is done with routed point-to-point links.
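On a core switch in this design, the EIGRP configuration might resemble the following sketch; the autonomous system number, summary network statement, and interface names are assumptions for illustration only:

router eigrp 100
 network 10.0.0.0 0.255.255.255
 passive-interface default
 no passive-interface TenGigabitEthernet1/1
 no passive-interface TenGigabitEthernet1/2

Enabling EIGRP adjacencies only on the routed point-to-point links keeps routing protocol traffic off any non-transit interfaces.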

Figure 14-1 shows the existing environment.
