
SUN BLADE 6000 AND 6048 MODULAR SYSTEMS

Open Modular Architecture with a Choice of Sun SPARC®, Intel® Xeon®, and AMD Opteron Platforms

White Paper
June 2008

Sun Microsystems, Inc.

Table of Contents

Executive Summary
An Open Systems Approach to Modular Architecture
  The Promise of Blade Architecture
  The Sun Blade 6000 and 6048 Modular Systems
  Open and Modular System Architecture
Sun Blade 6000 and 6048 Modular Systems Overview
  Chassis Front Perspective
  Chassis Rear Perspective
  Passive Midplane
  Server Modules Based on Sun SPARC, Intel Xeon, and AMD Opteron Processors
  A Choice of Operating Systems
Server Module Architecture
  Sun Blade T6320 Server Module
  Sun Blade T6300 Server Module
  Sun Blade X6220 Server Module
  Sun Blade X6250 Server Module
  Sun Blade X6450 Server Module
I/O Expansion, Networking, and Management
  Server Module Hard Drives
  PCI Express ExpressModules (EMs)
  PCI Express Network Express Modules (NEMs)
  Transparent and Open Chassis and System Management
  Sun xVM Ops Center
Conclusion



Executive Summary

The Participation Age is driving new demands that are focused squarely on the capabilities of the datacenter. Web services and rapidly escalating Internet use are driving competitive organizations to lead with innovative new services and scalable, dynamic infrastructure. High performance computing (HPC) is constantly finding new applications in both science and industry, fostering new demands for performance and density. Agility is paramount, and organizations must be able to respond quickly to unpredictable needs for capacity — adding compute power or growing services on demand. At the same time, most datacenters are rapidly running out of space, power, and cooling even as energy costs continue to rise. Rapid growth must be met with consolidated infrastructure, controlled and predictable costs, and efficient management practices. Simply adding more low-density, power-hungry servers is clearly not the answer.

Blade server architecture offers considerable promise toward addressing these issues through increased compute density, improved serviceability, and lower levels of exposed complexity. Unfortunately, most legacy blade platforms don't provide the flexibility needed by many of today's Web services and HPC applications. Complicating matters, many legacy blade server platforms lock customers into a proprietary and vendor-specific infrastructure that often requires redesign of existing network, management, and storage environments. These legacy chassis designs also often artificially constrain expansion capabilities. As a result, traditional blade architectures have been largely restricted to low-end Web and IT services.

Responding to these challenges, the Sun Blade 6000 and 6048 modular systems provide an open modular architecture that delivers the benefits of blade architecture without the common drawbacks. Optimized for performance, efficiency, and density, these platforms take an open systems approach, employing the latest processors, operating systems, industry-standard I/O modules, and transparent networking and management. With a choice of server modules based on Sun SPARC®, Intel® Xeon®, and AMD Opteron processors, organizations can select the platforms that best match their applications or existing infrastructure, without worrying about vendor lock-in.

Together with the successful Sun Blade 8000 and 8000 P modular systems, the Sun Blade 6000 and 6048 modular systems present a comprehensive multitier blade portfolio that lets organizations deploy the broadest range of applications on the most suitable platforms. The result is a modular architecture that serves the needs of the datacenter and the goals of the business while protecting existing investments into the future. This document describes the Sun Blade 6000 and 6048 modular systems along with their key applications, architecture, and components.



Chapter 1
An Open Systems Approach to Modular Architecture

Organizations operating traditional IT infrastructure, business processing, and back office applications are always looking for ways to cut costs and safely consolidate infrastructure. For many, large numbers of older and less efficient systems constrain the ability to grow and adapt, both physically and computationally. Emerging segments such as Web services, along with a renewed focus on high performance computing (HPC), are demanding computational performance, density, and dramatic scalability. With most datacenters constrained by space, heat, or power, these issues are very real. Successful solutions must be efficient, cost effective, and reliable, with investment protection factored into fundamental design considerations.

Fortunately, new technology is yielding opportunities for increased efficiency and flexibility in the datacenter. Dual and multicore processor technologies are doubling compute density every other year. Virtualization technologies and more powerful servers are making it possible to consolidate widely distributed datacenters using smaller numbers of more powerful servers. Standard high-bandwidth networking and interconnect technologies are becoming more affordable. Modern provisioning technology makes it possible to dynamically readjust workloads on the fly.

Regrettably, most current server form factors have failed to take full advantage of these trends. For instance, most traditional rackmount servers require a box swap in order to allow an organization to deploy new CPU and I/O technology. Modular architecture offers the opportunity to rapidly harvest the returns of new technology advances while serving the constantly changing needs of the enterprise.

The Promise of Blade Architecture

At its best, modular or blade server architecture blends the enterprise availability and management features of vertically-scalable platforms with the scalability and economic advantages of horizontally-scalable systems. In general, modular architectures offer considerable promise, and can contribute to:

• Higher compute density — providing more processing power per rack unit (RU) than with rackmount systems
• Increased serviceability and availability — featuring shared common system components such as power, cooling, and I/O interconnects
• Reduced complexity — through fewer required components, cable and component aggregation, and consolidated management
• Faster service expansion and bulk deployment — letting organizations expand or scale existing services and flexibly pre-provision chassis and I/O components
• Lowered costs — since modular servers can be less expensive to acquire, easier to service, and easier to manage



While some organizations adopted first-generation blade technology for Web servers or simple IT infrastructure, many legacy blade platforms have not been able to deliver on this promise for a broader set of applications. Part of the problem is that most legacy blade systems are based on proprietary architectures that lock adopters into an extensive infrastructure that constrains deployment. In addition, though vendors typically try to price server modules economically, they often charge a premium for the required proprietary I/O and switching infrastructure. Availability of suitable computational platforms has also been problematic.

Together, these constraints caused trade-offs in both features and performance that had to be weighed when considering blade technology for individual applications:

• Power and cooling limitations often meant that processors were limited to less powerful mobile versions.
• Limited processing power, memory capacity, and I/O bandwidth severely constrained the applications that could be deployed on blade server platforms.
• Proprietary tie-ins and other constraints in chassis design dictated networking topology, and limited I/O expansion possibilities to a small number of proprietary modules.

These compromises in chassis design were largely the result of a primary focus on density — with smaller chassis requiring small-format server modules. Ultimately these designs limited the broad application of blade technology.

Sun Blade 6000 and 6048 Modular Systems

To address the shortcomings of earlier blade platforms, Sun started with a design point focused on the needs of the datacenter, rather than with preconceptions of chassis design. With this innovative and truly modular approach and a no-compromise feature set, the newly expanded Sun Blade family of modular systems offers considerable advantages for a wide range of applications. Organizations gain the promised benefits of blades, and can save more by deploying a broader range of their applications on modular system platforms.

• Scalable, Expandable, and Serviceable Multitier Architecture
  Sun Blade 6000 and 6048 modular systems let organizations deploy multitier applications on a single unified modular architecture. These systems support all major volume CPU architectures, including UltraSPARC® T1 and T2 processors with CoolThreads technology, Intel Xeon processors, and Next Generation AMD Opteron processors. The Solaris Operating System (Solaris OS) is supported uniformly on all platforms, and support is also provided for Linux and Windows operating systems as appropriate.

  By offering the fastest AMD, Intel, and UltraSPARC T1 and T2 processors available, large memory, and high I/O capacity, these systems support a very broad range of applications. In addition, the Sun Blade 6000 and 6048 modular systems achieve better power efficiency by consolidating power and cooling infrastructure for multiple systems into the modular system chassis. The result is high-performance infrastructure that packs more performance and functionality into a smaller space — both in terms of real estate and power envelope.

  With innovative chassis design, Sun Blade modular systems allow organizations to take full advantage of future technology without "forklift upgrades." Organizations can independently service, upgrade, and expand compute, I/O, power, cooling, and management modules. All major components are hot pluggable and hot swappable, including I/O modules.

• Sun Blade Transparent Management
  Many blade vendors provide management solutions that lock organizations into proprietary management tools. With the Sun Blade 6000 and 6048 modular systems, customers have the choice of using their existing management tools or Sun Blade Transparent Management. Sun Blade Transparent Management is a standards-based, cross-platform tool that provides direct management of individual server modules and of chassis-level modules using Sun Integrated Lights Out Management (ILOM). With direct management access to server modules, existing or favorite management tools from Sun or third parties can be used. With this approach, administrative staff productivity can be retained, with no additional training or changes in management practices.

• Open and Independent Industry-Standard I/O
  The Sun Blade 6000 and 6048 modular systems provide a cable-once architecture with complete hardware isolation of compute and I/O modules. Sun supports true industry-standard I/O on its modular system platforms with a design that completely separates CPU and I/O modules. Sun Blade modular systems utilize standard PCI Express I/O architecture and adapters, the same technology that dominates the rackmount server industry. I/O adapters from multiple vendors are available to work with Sun Blade modular systems.

  A truly modular design based on industry-standard hot-pluggable I/O means that systems are easier to install and service — providing simpler administration, higher reliability, and better compatibility with existing network and storage environments. For instance, replacing an I/O module in a Sun Blade modular system requires less than a minute.

• Highly-Efficient Cooling
  Traditional blade platforms have a reputation for being hot and unreliable — a reputation caused by systems with insufficient cooling and chassis airflow. Not only do higher temperatures negatively impact electronic reliability, but hot and inefficient systems require more datacenter cooling infrastructure, with its associated footprint and power draw. In response, the Sun Blade 6000 modular system provides optimized cooling and airflow for reliable system operation and efficient datacenter cooling.

  In fact, Sun Blade modular systems deliver the same cooling and airflow capacity as Sun's rackmount systems — for both SPARC and x64 server modules — resulting in reliable system operation and less required cooling infrastructure. Better airflow can translate directly into better reliability, reduced downtime, and improved serviceability. These systems also help organizations meet growing demand while preserving existing datacenters.

• Virtually Unmatched Investment Protection with the Sun Refresh Service
  Computing technology is constantly evolving, delivering improved performance and new energy efficiencies over time. Unfortunately, this progress combined with traditional purchasing models often results in server sprawl as businesses add new servers year over year to meet growing needs for computational infrastructure. This consumptive model causes real issues, driving datacenter buildout and power and cooling costs that are often well in excess of hardware acquisition costs.

  The Sun Refresh Service for Sun Blade Modular Systems lets organizations break away from the traditional "acquire-and-depreciate" life cycle — replenishing datacenters with fresh technology and providing virtually unmatched investment protection. With this service, IT managers can adapt to ongoing changes in technology and business needs at lower cost, refreshing the datacenter frequently in order to reap the benefits offered by the latest advancements in technology. Increasing the productivity of datacenter infrastructure with the Sun Refresh Service also minimizes the need to add more datacenter space.

  Sun Blade modular systems in particular complement this approach, since compute elements can be easily upgraded with minimal disruption to the rest of the infrastructure. Careful planning has gone into the Sun Blade 6000 and 6048 modular systems to help ensure that they provide the power, cooling, and I/O headroom to operate future server modules. The Sun Refresh Service is being expanded in phases to different geographies around the world. Please check http://www.sun.com/blades for service availability in desired locations.

Open and Modular System Architecture

Along with the Sun Blade 8000 and 8000 P modular systems, the Sun Blade 6000 and 6048 modular systems provide a new approach to modular system architecture. This approach combines careful long-term chassis design with an open and standard systems architecture.



Innovative Industry-Standard Design

Providing choice in modular system platforms is essential, both to help enable the broadest set of applications and to provide the best investment protection for a range of different organizations and their requirements. Sun Blade 6000 and 6048 modular systems offer choice and key innovations for modular computing.

• A Choice of Processor Architectures and Operating Systems
  Sun Blade 6000 and 6048 modular systems support a range of full-performance, full-featured Sun Blade 6000 server modules.
  – The Sun Blade T6320 server module offers support for the massively-threaded UltraSPARC T2 processor with four, six, or eight cores, up to 64 threads, and support for 64 GB of memory.
  – The Sun Blade T6300 server module provides a single socket for an UltraSPARC T1 processor, featuring either six or eight cores, up to 32 threads, and support for up to 32 GB of memory.
  – The Sun Blade X6220 server module provides support for two Next Generation AMD Opteron 2000 Series processors and support for up to 64 GB of memory.
  – The Sun Blade X6250 server module provides two sockets for Dual-Core Intel Xeon Processor 5100 series or Quad-Core Intel Xeon Processor 5300 series CPUs, with up to 64 GB of memory per server module.
  – The Sun Blade X6450 server module provides four sockets for Dual-Core Intel Xeon Processor 7200 series or Quad-Core Intel Xeon Processor 7300 series CPUs, with up to 96 GB of memory per server module.

  Each server module also provides significant I/O capacity, with up to 32 lanes of PCI Express bandwidth delivered from each server module to the multiple available I/O expansion modules (a total of up to 142 Gb/s supported per server module). To enhance availability, server modules have no power supply or fans, and feature four hot-swap disks with hardware RAID built in. Organizations can deploy server modules based on the processors and operating system that best serve their applications or environment. Different server modules can be mixed and matched in a single chassis, and deployed and redeployed as needs dictate.

• Complete Separation Between CPU and I/O Modules
  The Sun Blade 6000 and 6048 modular system design avoids compromises because it provides a complete separation between CPU and I/O modules. Two types of I/O modules are supported:
  – Up to two industry-standard PCI Express ExpressModules (EMs) can be dedicated to each server module.
  – Up to two PCI Express Network Express Modules (NEMs) provide bulk I/O for all of the server modules installed in the system.



Through this flexible approach, server modules can be configured with different I/O options depending on the applications they host. I/O modules are hot-plug capable, and customers can choose from Sun-branded or third-party adapters for networking, storage, clustering, and other I/O functions.

• Transparent Chassis Management Infrastructure
  Within the Sun Blade 6000 and 6048 modular systems, a Chassis Monitoring Module (CMM) works in conjunction with the service processor on each server module to form a complete and transparent management solution. Each Sun Blade 6000 server module contains its own directly addressable management service processor that is accessible through the CMM. Though similar in function, these service processors vary with the individual server modules. Generally, these service processors support Lights Out Management (LOM), and provide support for IPMI, SNMP, CLI (through serial console or SSH), and HTTP(S) management methods. In addition, Sun xVM Ops Center (formerly Sun Connection and Sun N1 System Manager software) provides discovery, aggregated management, and bulk deployment for multiple systems.

• Innovative and Highly-Reliable Chassis Design for Different Needs
  Sun Blade 6000 and 6048 modular systems are intended for a long life, with a design that assumes ongoing improvements in technology. The chassis integrates AC power supplies and cooling fans for all of the server and I/O modules. This approach keeps these components off of the server modules, making the modules more efficient and more reliable. Power supplies and fans in the chassis are designed for ease of service, hot-swappability, and redundancy. The chassis provides power and cooling infrastructure to support current and future CPU and memory configurations, helping to ensure that the chassis life cycle will span multiple generations of processor upgrades. All modular components, such as the CMM, server modules, EMs, and NEMs, are hot-plug capable. In addition, I/O paths can be configured in a redundant fashion.

• One Architecture with a Choice of Chassis
  Organizations need modular chassis that allow them to deploy exactly the amount of processing and I/O that they require, while scaling effectively to meet their needs. With a single unified architecture, Sun Blade 6000 and 6048 modular systems provide different levels of capacity. For smaller incremental growth, the Sun Blade 6000 modular system is provided in a compact rackmount chassis that occupies 10 rack units (10 RU). Each Sun Blade 6000 chassis can house up to 10 server modules, providing support for up to 40 server modules per rack. Designed for maximum density and scalability, the Sun Blade 6048 modular system features a standard rack-size chassis that facilitates the deployment of high-density infrastructure. By eliminating all of the hardware typically used to rack-mount individual blade chassis, the Sun Blade 6048 modular system provides 20 percent more usable space in the same physical footprint. Up to 48 Sun Blade 6000 server modules can be deployed in a single Sun Blade 6048 modular system.

A Choice of Sun SPARC®, Intel® Xeon®, and AMD Opteron Processors

Legacy blade platforms were often restrictive in the processor architectures they supported, limiting innovation for modular systems and forcing difficult architectural choices for adopters. In contrast, Sun Blade 6000 and 6048 modular systems offer a choice of server modules based on UltraSPARC T2 or T1 processors, Intel Xeon processors, or Next Generation AMD Opteron 2000 Series processors. In addition, Sun Blade 6000 server modules provide large memory capacities, while the individual chassis provide significant power and cooling capacity. The available Sun Blade 6000 server modules are described below.

• Sun Blade T6320 Server Module
  Based on the industry's first massively threaded system on a chip (SoC), the UltraSPARC T2 processor-based Sun Blade T6320 server module brings next-generation chip multithreading (CMT) to a modular system platform. Building on the strengths of its predecessor, the UltraSPARC T2 processor offers support for eight threads per core, and integrates memory control, caches, networking, I/O, and cryptography on the processor die. Four-, six-, and eight-core UltraSPARC T2 processors are supported, yielding up to 64 threads. Like Sun's rackmount Sun SPARC Enterprise T5120 and T5220 servers, the Sun Blade T6320 server module provides significant memory bandwidth with support for 667 MHz Fully-Buffered DIMMs (FBDIMMs). Up to 16 FBDIMMs can be installed to support up to 64 GB of memory. Individual Sun Blade T6320 server modules can provide industry-leading performance as measured by the Space, Watts, and Performance (SWaP) metric¹; a brief worked example follows this list.

• Sun Blade T6300 Server Module
  The Sun Blade T6300 server module utilizes the successful UltraSPARC T1 processor. With a single socket for a six- or eight-core UltraSPARC T1 processor, up to 32 threads can be supported for applications that require substantial amounts of throughput. Similar to the Sun Fire / SPARC Enterprise T2000 server, the server module uses all four of the processor's memory controllers, providing large memory bandwidth. Up to eight DDR2 533 DIMMs running at 400 MHz can be installed for a maximum of 32 GB of RAM per server module.

• Sun Blade X6220 Server Module
  Ideal for consolidation in x64 environments, the Sun Blade X6220 server module provides support for two Next Generation AMD Opteron 2000 Series processors, with dual cores per processor. Sixteen memory slots are provided for a total of up to 64 GB of RAM with 667 MHz DDR2 DIMMs. Organizations can consolidate IT and Web services infrastructure at a fraction of the cost of competing x64 servers or blades. The Sun Blade X6220 server module also delivers industry-leading floating point performance, helping to empower HPC applications that require both computational density and performance.

1. For more information on the SWaP metric, along with the latest benchmark results, please see www.sun.com/swap.

• Sun Blade X6250 Server Module
  The Sun Blade X6250 server module is ideal for x64 applications, such as those at the Web and application tiers, and is also appropriate for HPC applications. Two sockets are provided for Dual-Core Intel Xeon Processor 5100 series or Quad-Core Intel Xeon Processor 5300 series CPUs. A high memory density of up to 64 GB gives the Sun Blade X6250 server module considerable capacity. This server module also provides industry-leading integer performance and unconstrained I/O capacity as compared to other Intel Xeon Processor-based blade servers.

• Sun Blade X6450 Server Module
  The Sun Blade X6450 server module is ideal for x64 applications and scalable workloads such as databases and HPC applications. Four sockets are provided for Dual-Core Intel Xeon Processor 7200 series or Quad-Core Intel Xeon Processor 7300 series CPUs, offering strong integer performance characteristics. Up to 24 FB-DIMMs are supported, yielding a large memory capacity of up to 96 GB using 4 GB FB-DIMMs. Industry-leading I/O capacity is provided as compared to other Intel Xeon Processor-based blade servers.
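The SWaP comparison referenced in the T6320 description relates benchmark performance to the rack space and power a system consumes. Below is a minimal sketch, assuming the commonly cited definition SWaP = Performance / (Space × Power); all sample figures are hypothetical placeholders, not benchmark results (see footnote 1 for published numbers).

```python
# Minimal sketch of the Space, Watts, and Performance (SWaP) metric,
# assuming SWaP = Performance / (Space * Power), where Performance is
# a benchmark result, Space is rack units (RU), and Power is measured
# consumption in watts. All figures below are hypothetical.

def swap(performance: float, space_ru: float, power_watts: float) -> float:
    """Return the SWaP score: performance per (rack unit x watt)."""
    return performance / (space_ru * power_watts)

# Hypothetical comparison: a blade chassis vs. equivalent rackmount servers.
blade_chassis = swap(performance=10_000, space_ru=10, power_watts=6_000)
rackmount = swap(performance=10_000, space_ru=20, power_watts=8_000)
print(f"blade: {blade_chassis:.4f}  rackmount: {rackmount:.4f}")
```

A higher SWaP score indicates more delivered performance per unit of space and power, which is why dense, power-efficient server modules score well on this metric.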

Modular and "Future-Proof" Chassis Design

Sun Blade 6000 and 6048 modular systems provide significant improvements over legacy server module platforms. Sun's focus on the needs of the datacenter has resulted in chassis designs that don't force compromises in the performance and capabilities delivered by the server modules. For example, in addition to offering a choice of server modules that support the latest volume processors, these systems deliver 100 percent of system I/O to the I/O modules through a passive midplane.



The Sun Blade 6000 and 6048 modular system chassis are shown in Figure 1. The Sun Blade 6000 modular system is provided in a 10 rack unit (10 RU) chassis, with up to four chassis supported in a single 42U rack or three chassis supported in a 38U rack. The Sun Blade 6048 modular system chassis takes the form of a standard rack and features four independent shelves.

Figure 1. Sun Blade 6000 and 6048 modular systems (left and right respectively)

Both the Sun Blade 6000 and 6048 modular systems support flexible configuration, and are built from a range of standard hot-plug, hot-swap modules, including:

• Sun Blade T6320, T6300, X6220, X6250, or X6450 server modules, in any combination
• Blade-dedicated PCI Express ExpressModules (EMs), supporting industry-standard PCI Express interfaces
• PCI Express Network Express Modules (NEMs), providing access and an aggregated interface to all of the server modules in the Sun Blade 6000 chassis or Sun Blade 6048 shelf
• An integral Chassis Monitoring Module (CMM) for transparent management access to individual server modules
• Hot-swap (N+N) power supply modules
• Redundant (N+1) cooling fans

With common system components and a choice of chassis, organizations can scale capacity with either fine or coarse granularity, as their needs dictate. Table 1 lists the capacities of the Sun Blade 6000 and 6048 modular systems, along with single-shelf capacity in the Sun Blade 6048 modular system. Maximum numbers of sockets, cores, and threads are listed for AMD Opteron, Intel Xeon, and UltraSPARC T1 and T2 processors.

Table 1. Sun Blade 6000 and 6048 modular system capacities

Category                                     | Sun Blade 6000 modular system | Sun Blade 6048 modular shelf | Sun Blade 6048 modular system
---------------------------------------------|-------------------------------|------------------------------|------------------------------
Sun Blade 6000 server modules                | 10                            | 12                           | 48
PCI Express ExpressModules                   | 20                            | 24                           | 96
PCI Express Network Express Modules          | Up to 2                       | Up to 2                      | Up to 8
Chassis monitoring modules (CMM)             | 1                             | 1                            | 4
Hot-swap power supplies (N+N)                | 2, 6000 Watt                  | 2, 8400 Watt                 | 8, 8400 Watt
Redundant cooling fans (N+1)                 | 6                             | 8                            | 32
Maximum AMD Opteron sockets/cores/threads    | 20/40/40                      | 24/48/48                     | 96/192/192
Maximum Intel Xeon sockets/cores/threads     | 40/160/160                    | 48/192/192                   | 192/768/768
Maximum UltraSPARC T1 sockets/cores/threads  | 10/80/320                     | 12/96/384                    | 48/384/1536
Maximum UltraSPARC T2 sockets/cores/threads  | 10/80/640                     | 12/96/768                    | 48/384/3072
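The socket, core, and thread rows in Table 1 follow directly from the per-module specifications described earlier, multiplied by the slot count of each enclosure. A minimal sketch, assuming the highest-end configuration of each processor family named in this paper:

```python
# Reproduce the "maximum sockets/cores/threads" rows of Table 1 from
# per-module specifications stated in this paper. Thread counts assume
# the highest-end configuration of each processor family.
SLOTS = {"6000 chassis": 10, "6048 shelf": 12, "6048 system": 48}

# (sockets per module, cores per socket, threads per core)
MODULES = {
    "AMD Opteron (X6220)":   (2, 2, 1),  # dual-core, 1 thread per core
    "Intel Xeon (X6450)":    (4, 4, 1),  # quad-core, 1 thread per core
    "UltraSPARC T1 (T6300)": (1, 8, 4),  # 8 cores, 4 threads per core
    "UltraSPARC T2 (T6320)": (1, 8, 8),  # 8 cores, 8 threads per core
}

for name, (sockets, cores, threads) in MODULES.items():
    row = [f"{n * sockets}/{n * sockets * cores}/{n * sockets * cores * threads}"
           for n in SLOTS.values()]
    print(f"{name:24s} {row}")
# e.g. UltraSPARC T2: 10/80/640, 12/96/768, 48/384/3072, matching Table 1.
```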



Chapter 2
Sun Blade 6000 and 6048 Modular Systems Overview

Together with the Sun Blade 8000 and 8000 P modular systems, Sun Blade 6000 and 6048 modular systems bring significant advancements to deploying modular systems across the organization. The Sun Blade 6000 modular system is ideal for delivering maximum entry-level price/performance with superior features as compared to traditional rackmount servers. With its standard rack-sized chassis and high density, the Sun Blade 6048 modular system helps enable the streamlined deployment of dense and highly-scalable datacenters. Supporting a choice of x64 or SPARC platforms, Sun Blade 6000 and 6048 modular systems are ideal for a variety of applications and markets.

• Web Services
  For Web services applications sized to take advantage of two-socket x64 server economy, the Sun Blade 6000 modular system delivers one of the industry's most compelling solutions. The system offers maximum performance, enterprise reliability, and easy scalability at a fraction of the price of competing products. The stateless approach of modular systems makes it easier to build large Web server farms with maximum manageability and deployment flexibility. Organizations can add new capacity quickly or redeploy hardware resources as required.

• Virtualization and Consolidation
  Virtualization and consolidation have never been more important as organizations seek to get more from their deployed infrastructure. Modular systems based on Sun's UltraSPARC T1 and T2 processors with CoolThreads technology can offer consolidation solutions with Sun Logical Domains and Solaris Containers that cut power and cooling costs. Modular systems based on Sun's x64 server modules offer up to twice the memory and I/O of competing x64 blades or rackmount servers. These systems offer enterprise-class reliability, availability, and serviceability features — providing the needed headroom for consolidation with VMware, Xen, or Microsoft Virtual Server.

• High Performance Computing (HPC)
  Commercial and scientific computational applications such as electronic design automation (EDA) and mechanical computer-aided engineering (MCAE) place significant demands on system architecture. These applications require a combination of computational performance and system capacity, with exacting needs for integer and floating point performance, large memory configurations, and flexible I/O. Sun Blade 6000 and 6048 modular systems based on Sun's x64 server modules, combined with the Sun Refresh Service, allow organizations to purchase the highest-performing and most cost-effective platforms now, while maintaining that technological edge for years to come.



• Terascale and Petascale Supercomputing Clusters and Grids
  The largest supercomputing clusters in the world are needed to push back the fundamental limits of understanding in key scientific and engineering endeavors. The Sun Constellation System serves these institutions as the world's first open petascale computing environment, combining ultra-dense high-performance computing, networking, storage, and software into an integrated system. The Sun Constellation System delivers massive scalability — from teraflops to petaflops — while offering dramatically reduced complexity and breakthrough economics. Components of the Sun Constellation System include:
  – The Sun Datacenter Switch 3456, the world's largest InfiniBand core switch, with capacity for 3,456 server nodes (and up to 13,824 server nodes with multiple core switches)
  – The Sun Blade 6048 modular system, for high-density compute nodes with an integral InfiniBand switched NEM
  – Sun Fire X4500 server clusters and the Sun StorageTek 5800 system, providing massively scalable and cost-effective storage solutions
  – A comprehensive HPC software stack to manage and augment the world's largest supercomputing clusters and grids

Sun Constellation System components are shown in Figure 2.

Figure 2. The Sun Constellation System can be used to build the largest terascale and petascale supercomputing clusters and grids

Chassis Front Perspective

Sun Blade 6000 and 6048 chassis house the server modules and I/O modules, connecting the two through the passive midplane. Redundant and hot-swappable power supplies and fans are also hosted in the chassis. All slots are accessible from either the front or the rear of the chassis for easy serviceability. Server modules, I/O modules, power supplies, and fans can all be added and removed while the chassis and other elements in the enclosure are powered on. This capability yields great expansion opportunity and provides considerable flexibility. The front perspectives of the Sun Blade 6000 chassis and a single Sun Blade 6048 shelf are shown in Figure 3, with components described in the sections that follow.

Figure 3. Front view of the Sun Blade 6000 chassis (left) and a single Sun Blade 6048 shelf (right), showing the hot-swappable N+N power supply modules with integral fans and the Sun Blade 6000 server modules

Operator Panel

An operator panel is located at the top of the chassis, providing status on the overall condition of the system. Indicators show whether the chassis is in standby or operational mode, and whether an over-temperature condition is occurring. A push-button indicator acts as a locator button for the chassis in case there is a need to remotely identify a chassis within a rack or in a crowded datacenter. If any of the components in the chassis should present a problem or a failure, the operator panel reflects that issue as well.

Power Supply Modules and Front Fan Modules

Two power supply modules load from the front of the chassis or shelf. Each module contains multiple power supply cores enclosed within a single unit (two for the Sun Blade 6000, and three for the Sun Blade 6048 power supply modules), and each module requires a corresponding number of power inlets. Power supply modules are hot-swap capable and contain a replaceable fan module that helps cool both the power supplies and the PCI Express modules in the rear of the enclosure. In case of a power supply failure, the integral fan modules continue to function because they are energized directly from the chassis power grid, independently of the power supply modules that contain them.

The power supply modules provide the total power required by the chassis (or shelf). The power supply modules can be configured redundantly in an N+N configuration, with a single power supply module able to power the entire chassis at full load. In order to provide N+N redundancy, all power cords must be energized. If both power supply modules are energized, all of the systems in the chassis are protected from power supply failure. A power supply module can fail or be disconnected without affecting the server modules and components running inside the chassis. To further enhance this protection, power grid redundancy for all of the systems and components in the chassis can be easily achieved by connecting each of the two power supply modules to a different power grid within the datacenter.

Sun Blade 6000 power supply modules have a high 90-percent efficiency rating and an output voltage of 12 V DC. The high efficiency rating indicates that there are fewer power losses within the power supply itself, wasting less power in the energy conversion stage from alternating current (AC) to direct current (DC). Also, by feeding 12 V DC directly to the midplane, fewer conversion stages are required in the individual server modules. This strategy yields less power conversion energy waste and generates less waste heat within the server module, making the overall system more efficient.

Provisioned power for rack-mounted configurations depends on the number of chassis deployed per rack. A 42U rack with four installed Sun Blade 6000 chassis requires 24 kilowatts, while a 38U rack with three chassis requires 18 kilowatts. Actual power consumption will vary with the ongoing load of the systems. For a more in-depth analysis of day-to-day power consumption of the system, please visit the power calculator located on the Sun Website at http://www.sun.com/blades.
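Both rack figures above imply the same per-chassis provisioning budget (24 kW / 4 = 18 kW / 3 = 6 kW per chassis). A minimal sketch of that arithmetic; remember this is provisioned headroom, not measured draw:

```python
# Provisioned rack power from the figures above: each Sun Blade 6000
# chassis is provisioned at 6 kW (24 kW / 4 chassis in a 42U rack,
# 18 kW / 3 chassis in a 38U rack). Actual draw varies with load.
KW_PER_6000_CHASSIS = 6.0

def provisioned_power_kw(chassis_count: int) -> float:
    """Provisioned (not measured) power for a rack of Sun Blade 6000 chassis."""
    return chassis_count * KW_PER_6000_CHASSIS

print(provisioned_power_kw(4))  # 24.0 kW for a 42U rack
print(provisioned_power_kw(3))  # 18.0 kW for a 38U rack
```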

Sun Blade 6048 power supply modules include three power supply cores, facilitating adjustable power utilization depending on the power consumption profiles of the installed server modules and other components. Two or three cores can be energized in each power supply module to make the system perform at optimal efficiency. An online power calculator (www.sun.com/servers/blades/6048chassis/calc) can help identify the power envelope of each shelf, and can help determine how many power supply cores to energize. Energizing two cores supports 5,600 Watts, and energizing three cores supports 8,400 Watts per shelf.
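A minimal sketch of that sizing decision, using only the per-shelf thresholds stated above; the threshold-based logic is an illustrative simplification, and Sun's online power calculator remains the authoritative tool:

```python
# Choose how many power supply cores to energize per Sun Blade 6048
# shelf, given an estimated shelf power envelope. Thresholds are the
# per-shelf figures stated above; this simplified rule is illustrative
# only, so consult Sun's power calculator for real sizing decisions.
def cores_to_energize(shelf_watts: float) -> int:
    if shelf_watts <= 5_600:
        return 2  # two cores support up to 5,600 W per shelf
    if shelf_watts <= 8_400:
        return 3  # three cores support up to 8,400 W per shelf
    raise ValueError("exceeds the 8,400 W per-shelf envelope")

print(cores_to_energize(5_000))  # 2
print(cores_to_energize(7_000))  # 3
```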

Server Modules

Up to 10 Sun Blade 6000 server modules can be inserted vertically beneath the power supply modules in the Sun Blade 6000 chassis. The Sun Blade 6048 chassis supports up to 12 Sun Blade 6000 server modules per shelf, or 48 server modules per chassis. The four hard disk drives on each server module are available for easy hot-swap from the front of the chassis. Indicator LEDs and I/O ports are also provided on the front of the server modules for easy access. A number of connectors are provided on the front panel of each server module, available through a server module adaptor ("octopus cable"). Depending on the server module, available ports include a VGA HD-15 monitor port, two USB 2.0 ports, and a DB-9 or RJ-45 serial port that connects to the server module and integral service processors.



Chassis Rear Perspective

The rear of the Sun Blade 6000 chassis and a single Sun Blade 6048 shelf provide access to the back side of the passive midplane for I/O modules (Figure 4). Slots for PCI Express ExpressModules (EMs) and PCI Express Network Express Modules (NEMs) are provided. I/O modules are all hot-swap capable and provide I/O capabilities to server modules.

Figure 4. Rear view of the Sun Blade 6000 chassis, showing the power plugs and cords, PCI Express ExpressModules, PCI Express Network Express Modules, and N+1 redundant, hot-swappable fan modules

PCI Express ExpressModules (EMs)

Twenty hot-plug capable PCI Express ExpressModule slots are accessible at the top of the Sun Blade 6000 chassis, with 24 EMs supported by each Sun Blade 6048 shelf. EMs offer a variety of choices for communications, including gigabit Ethernet, Fibre Channel, and InfiniBand interconnects. Different EMs can be chosen for every server module in order to provide each with the right type of fabric connectivity with a high degree of granularity. Two PCI Express ExpressModule slots are dedicated and directly connected to each server module through the passive midplane. Slots 0 and 1 (from right to left) are connected to server module 0, slots 2 and 3 are connected to server module 1, and so on across the back of the chassis.
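The slot-to-module assignment described above is a simple fixed mapping, which a short sketch makes explicit:

```python
# Fixed EM-slot assignment described above: each server module n owns
# EM slots 2n and 2n+1 (slots 0 and 1 for module 0, slots 2 and 3 for
# module 1, and so on across the back of the chassis).
def em_slots(server_module: int) -> tuple[int, int]:
    return (2 * server_module, 2 * server_module + 1)

assert em_slots(0) == (0, 1)
assert em_slots(1) == (2, 3)
assert em_slots(9) == (18, 19)  # last module in a 10-slot Sun Blade 6000
```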

PCI Express Network Express Modules (NEMs)

Space is provided for up to two PCI Express Network Express Modules (NEMs) in the Sun Blade 6000 chassis and in each Sun Blade 6048 shelf. NEMs provide the same I/O capabilities across all of the server modules installed in the chassis, simplifying connectivity and usually offering a low-cost I/O solution, since they provide I/O to all of the server modules. All the server modules are directly connected to each of the configured NEMs through PCI Express connections. Due to the different chassis widths, specific NEMs are provided to fit the Sun Blade 6000 and 6048 modular systems. More details on available NEMs for both systems are provided in Chapter 3.



Chassis Monitoring Module

A Chassis Monitoring Module (CMM) is located next to the NEM slots on the left-hand side of the Sun Blade 6000 chassis, and to the left of the NEM slots on the Sun Blade 6048 chassis — providing remote monitoring and a central access point to the chassis. The CMM includes an integrated switch that gives LAN access to the CMM's Ethernet ports and to the individual server module management ports. Individual server module management is completely transparent and independent from the CMM. The CMM on the Sun Blade 6048 modular system is combined with the power input module.
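Because each server module's service processor is directly addressable over the management LAN and supports IPMI (as noted in Chapter 1), standard tooling can query a module without proprietary software. Below is a minimal sketch driving the common ipmitool utility from Python; the hostname and credentials are placeholders, and the specific network path to each service processor in a given chassis should be taken from Sun's documentation:

```python
# Query a server module's service processor over the management LAN
# using the standard ipmitool CLI (IPMI over LAN). The host address,
# user, and password below are placeholders, not real defaults.
import subprocess

def ipmi_query(host: str, user: str, password: str, *command: str) -> str:
    """Run one ipmitool command against a service processor and return stdout."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         *command],
        capture_output=True, text=True, check=True)
    return result.stdout

# Example: read power state and sensor data from one server module's SP.
print(ipmi_query("sp-module-0.example.com", "admin", "secret",
                 "chassis", "status"))
print(ipmi_query("sp-module-0.example.com", "admin", "secret",
                 "sdr", "list"))
```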

Power Supply Inlets

Four power supply inlets (plugs) are available from the rear of the Sun Blade 6000 chassis, with six provided for each Sun Blade 6048 shelf. The number of inlets corresponds to the number of power supply cores in the two front-loaded power supply modules. Integral cable holders prevent accidental loss of power from inadvertent cable removal. Each of the cables requires a 220V, 20A circuit, and a minimum of two circuits is required to power each chassis. For full N+N redundancy, four circuits are required by the Sun Blade 6000 modular system, and six circuits are required by each Sun Blade 6048 modular system shelf.

Fans and Airflow

Chassis airflow is entirely front-to-back in both chassis, and is powered by rear fan modules and by the front fan modules mounted in the power supply modules. All rear fan modules are hot-swap and N+1, with six fan modules provided for each Sun Blade 6000 chassis and eight fan modules provided for each Sun Blade 6048 shelf. Each rear fan module comprises two redundant in-line fans. The front fan modules pull air in from the front of the chassis, blow it across the power supplies, and exhaust it through the EM and NEM spaces. The rear fan modules pull air from the front of the chassis and exhaust it through the rear. When all of the fans in the chassis are running at full speed, the chassis can provide up to 1,000 cubic feet per minute (CFM) of airflow through the chassis.

Passive Midplane

In essence, the passive midplanes in the Sun Blade 6000 and 6048 modular systems are a collection of wires and connectors between different modules in the chassis. Since there are no active components, the reliability of these printed circuit boards is extremely high — in the millions of hours, or hundreds of years. The passive midplane provides electrical connectivity between the server modules and the I/O modules.

All modules, front and rear, with the exception of the power supplies and the fan modules, connect directly to the passive midplane. The power supplies connect to the midplane through a bus bar, and to the AC inputs via a cable harness. The redundant fan modules plug individually into a set of three fan boards, where fan speed control and other chassis-level functions are implemented. The front fan modules that cool the PCI Express ExpressModules each connect to the chassis via blind-mate connections.

The main functions of the midplane include:

• Providing a mechanical connection point for all of the server modules
• Providing 12 VDC from the power supplies to each customer-replaceable module
• Providing 3.3 VDC power used to power the System Management Bus devices on each module, and to power the CMM
• Providing a PCI Express interconnect between the PCI Express root complexes on each server module and the EMs and NEMs installed in the chassis
• Connecting the server modules, CMMs, and NEMs to the chassis management network

Figure 5. Distribution of communications links from each Sun Blade 6000 server module (diagram: two PCI Express x8 links, one to each EM; PCI Express x4/x8 or XAUI, gigabit Ethernet, and SAS links to NEM 0 and NEM 1; and a service processor Ethernet link to the CMM)

Each server module is energized through the midplane from the redundant chassis power grid. The midplane also provides connectivity to the I2C network in the chassis, letting each server module directly monitor the chassis environment, including fan and power supply status as well as various temperature sensors. A number of I/O links are also routed through the midplane for each server module (Figure 5), including:
• Two x8 PCI Express links, one connecting to each of the dedicated EMs
• Two x4 or x8 PCI Express links, one connecting to each of the NEMs
• Two gigabit Ethernet links, each connecting to one of the NEMs
• Four x1 Serial Attached SCSI (SAS) links, with two connecting to each NEM (for future use)
These links are tallied in the sketch that follows.
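To make the aggregate concrete, the sketch below (an illustrative model, not Sun software) tallies these midplane links for one server module, using the per-link rates quoted later in Table 3, where an x8 PCI Express link counts as 32 Gbps and an x4 link as 16 Gbps:

```python
# Illustrative tally of per-blade midplane links, using the per-link rates
# from Table 3 (x8 PCIe = 32 Gbps, x4 PCIe = 16 Gbps, gigabit Ethernet =
# 1 Gbps, SAS = 3 Gbps).

def blade_midplane_gbps(nem_link_gbps: float) -> float:
    links = [
        32, 32,                        # two x8 PCIe links, one to each EM
        nem_link_gbps, nem_link_gbps,  # two PCIe links, one to each NEM
        1, 1,                          # two gigabit Ethernet links
        3, 3, 3, 3,                    # four x1 SAS links (future use)
    ]
    return sum(links)

print(blade_midplane_gbps(16))  # x4 links to the NEMs -> 110 Gbps
print(blade_midplane_gbps(32))  # x8 links to the NEMs -> 142 Gbps
```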


Server Modules Based on Sun SPARC, Intel Xeon, and AMD Opteron Processors

The ability to host demanding compute, memory, and I/O-intensive applications ultimately depends on the characteristics of the actual server modules. The innovative Sun Blade 6000 and 6048 chassis allow designers considerable flexibility in delivering powerful server modules for a broad range of applications.

Except for labeling, all Sun Blade 6000 server modules feature a physically identical front panel design. This design is intentional: any server module can be used in any slot of the chassis, no matter what the internal architecture of the server module. As mentioned, all server modules use the same midplane connectors and have equivalent I/O characteristics.

A Choice of Processors, a Choice of Operating Systems

By providing a choice of Sun SPARC, Intel Xeon, and AMD Opteron processors, the Sun Blade 6000 and 6048 modular systems can serve a wide range of applications and demands. Organizations are free to choose the platform that best suits their needs or fits in with their existing environments. Server modules of different architectures can also be mixed and matched in a single Sun Blade 6000 chassis, or within a single Sun Blade 6048 modular system shelf.

To help assure the best application performance, Sun Blade 6000 server modules provide substantial computational and memory capacity to support demanding applications. Table 2 lists the capabilities of the Sun Blade 6000 server modules, including processors, cores, threads, and memory capacity.

Table 2. Processor support and memory capacities for Sun Blade 6000 server modules

Server Module | Processor(s) | Cores/Threads | Memory Capacity
Sun Blade T6320 server module | 1 UltraSPARC T2 processor | 4, 6, or 8 cores; up to 64 threads | Up to 64 GB, 16 FB-DIMM slots
Sun Blade T6300 server module | 1 UltraSPARC T1 processor | 6 or 8 cores; up to 32 threads | Up to 32 GB, 8 DIMM slots
Sun Blade X6220 server module | 2 Next Generation AMD Opteron processors | 4 cores; 4 threads | Up to 64 GB, 16 DIMM slots
Sun Blade X6250 server module | 2 Intel Xeon Processor 5100 or 5300 series CPUs | 5100 series: 4 cores, 4 threads; 5300 series: 8 cores, 8 threads | Up to 64 GB, 16 FB-DIMM slots
Sun Blade X6450 server module | 4 Intel Xeon Processor 7200 or 7300 series CPUs | 7200 series: 8 cores, 8 threads; 7300 series: 16 cores, 16 threads | Up to 96 GB, 24 FB-DIMM slots
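Capacity planning across a chassis follows directly from these figures. The sketch below (illustrative only) encodes Table 2 and derives per-chassis totals, assuming 10 server module slots per Sun Blade 6000 chassis and 48 per Sun Blade 6048 system, consistent with the 640- and 3,072-thread figures quoted for the T6320 later in this paper:

```python
# Illustrative per-chassis capacity math based on Table 2. The slot counts
# (10 per Sun Blade 6000 chassis, 48 per Sun Blade 6048 system) match the
# thread totals cited for the T6320 elsewhere in this paper.

MODULES = {
    #  name:  (max threads, max memory in GB)
    "T6320": (64, 64),
    "T6300": (32, 32),
    "X6220": (4, 64),
    "X6250": (8, 64),
    "X6450": (16, 96),
}

def chassis_totals(module: str, slots: int) -> tuple[int, int]:
    """Return (total threads, total memory in GB) for a populated chassis."""
    threads, mem_gb = MODULES[module]
    return slots * threads, slots * mem_gb

print(chassis_totals("T6320", 10))  # (640, 640): threads, GB per 6000 chassis
print(chassis_totals("T6320", 48))  # (3072, 3072): threads, GB per 6048 system
```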



Leading I/O Throughput

Sun Blade 6000 server modules provide extensive I/O capabilities and a wealth of I/O options, allowing modular servers to be used for applications that require significant I/O throughput:
• Up to 142 Gbps of I/O throughput is provided on each Sun Blade 6000 server module, delivered through 32 lanes of PCI Express I/O as well as multiple gigabit Ethernet and SAS links. Each server module delivers its I/O to the passive midplane and to the I/O devices connected to it in the Sun Blade 6000 chassis or Sun Blade 6048 shelf.
• Four 2.5-inch SAS or SATA disk drives are supported in each server module (PCI-based).
• Two hot-plug PCI Express ExpressModule (EM) slots are dedicated to each server module (20 per chassis) for granular blade I/O configuration.
• Network Express Modules (NEMs) provide bulk I/O across multiple server modules and aggregate I/O functions. Sun Blade 6000 and 6048 modular systems support up to two NEMs, each with a PCI Express x8 or XAUI connection, a gigabit Ethernet connection, and two SAS link connections to each server module.

Table 3 lists the throughput provided through the passive midplane for each of the five server modules.

Table 3. Midplane throughput for Sun Blade 6000 server modules (links and Gbps per link)

Links | Sun Blade T6320 a | Sun Blade T6300 | Sun Blade X6220 | Sun Blade X6250 a | Sun Blade X6450 a
PCI Express links to EMs | 2 x8, 32 Gbps each | 2 x8, 32 Gbps each | 2 x8, 32 Gbps each | 2 x8, 32 Gbps each | 2 x8, 32 Gbps each
PCI Express links to NEMs | 2 x4, 16 Gbps each | 2 x8, 16 Gbps each | 2 x8, 32 Gbps each | 2 x4, 16 Gbps each | 2 x4, 16 Gbps each
Gigabit Ethernet links | 2, 1 Gbps each | 2, 1 Gbps each | 2, 1 Gbps each | 2, 1 Gbps each | 2, 1 Gbps each
SAS links | 4, 3 Gbps each | 4, 3 Gbps each | 4, 3 Gbps each | 4, 3 Gbps each | 4, 3 Gbps each
Total server module bandwidth | 142 Gbps | 142 Gbps | 142 Gbps | 110 Gbps | 110 Gbps

a. Server modules with RAID Expansion Module (REM) and Fabric Expansion Module (FEM)
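As a worked example of how the totals in Table 3 are composed, the Sun Blade X6220 row sums as follows (an arithmetic illustration, not an additional specification):

```python
# Worked example: Sun Blade X6220 midplane bandwidth from Table 3.
ems  = 2 * 32   # two x8 PCI Express links to the EMs, 32 Gbps each
nems = 2 * 32   # two x8 PCI Express links to the NEMs, 32 Gbps each
gbe  = 2 * 1    # two gigabit Ethernet links
sas  = 4 * 3    # four SAS links, 3 Gbps each
print(ems + nems + gbe + sas)  # 142 Gbps, matching the table's total
```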

Enterprise-Class Features

Unlike most traditional blade servers, Sun Blade 6000 server modules provide a host of enterprise features that help ensure greater reliability and availability:
• Each server module supports hot-plug capabilities
• Each server module supports four hot-plug disks, with built-in support for RAID 0 or 1 (diskless operation is also supported) 1
• Redundant hot-swap chassis-located fans mean greater reliability through decreased part count and no fans located on the server modules
• Redundant hot-swap chassis-located power supply modules mean that no power supplies are located on individual server modules

1. RAID 0, 1, 5, and 0+1 are supported by the Sun Blade X6250 and X6450 server modules with the Sun StorageTek RAID expansion module (REM)


Open Transparent Management

Together, Sun Blade 6000 server modules and Sun Blade 6000 and 6048 modular systems provide a robust and comprehensive set of management features, including:
• A dedicated service processor on each server module for blade-level management granularity
• A Chassis Monitoring Module (CMM) for direct access to server module management features
• Sun xVM Ops Center for server module discovery and OS provisioning, as well as bulk application-level provisioning

A Choice of Operating Systems

In order to provide maximum flexibility and investment protection, the Sun Blade 6000 server modules support a choice of operating systems, including:
• Solaris 10 OS
• The Linux operating system (64-bit Red Hat or SuSE Linux)
• Microsoft Windows
• VMware ESX Server

Table 4 lists the specific operating system versions supported by the Sun Blade 6000 server modules as of this writing. Please see www.sun.com/servers/blades/6000 for the latest supported operating systems and environments.

Table 4. Supported operating systems for Sun Blade 6000 server modules

Server Module | Supported Operating Systems
Sun Blade T6320 server module | Solaris 10 OS Update 4 with patches or later
Sun Blade T6300 server module | Solaris 10 OS Update 3 with patches or later
Sun Blade X6220, X6250, and X6450 server modules | Solaris 10 11/06 OS on x64, HW2 64-bit; Red Hat Enterprise Linux Advanced Server 4, U4 and U5, 32-bit; SuSE Linux Enterprise Server 10, 32-bit; VMware ESX 3.0.2 and 3.5; Microsoft Windows Server 2003 R2 (Standard and Enterprise Editions, 32- and 64-bit); Microsoft Windows Server 2008

Solaris OS Support on all Server Modules

Among the available operating systems, the Solaris OS is ideal for large-scale enterprise deployments. Supported on all Sun Blade 6000 server modules, the Solaris OS has specific features that can enhance flexibility and performance, with different features affecting different processors as noted.


• Sun Logical Domains Support in Sun Blade T6320 and T6300 Server Modules
Supported on all Sun servers that utilize Sun processors with chip multithreading (CMT) technology, Sun Logical Domains provide a full virtual machine that runs an independent operating system instance and contains virtualized CPU, memory, storage, console, and cryptographic devices. Within the Sun Logical Domains architecture, a small firmware layer known as the Hypervisor provides a stable, virtualized machine architecture to which an operating system can be written. As such, each logical domain is completely isolated, and the maximum number of virtual machines created on a single platform depends upon the capabilities of the Hypervisor rather than on the number of physical hardware devices installed in the system. For example, the Sun Blade T6320 server module with a single UltraSPARC T2 processor supports up to 64 logical domains, and each individual logical domain can run a unique instance of the operating system 1.

1. Though technically possible, this practice is not generally recommended.

By taking advantage of Sun Logical Domains, organizations gain the flexibility to deploy multiple operating systems simultaneously on a single server module. In addition, administrators can leverage virtual device capabilities to transport an entire software stack hosted on a logical domain from one physical machine to another. Logical domains can also host Solaris Containers to capture the isolation, flexibility, and manageability features of both technologies. By deeply integrating logical domains with both the industry-leading CMT capabilities of the UltraSPARC T1 and T2 processors and the Solaris 10 OS, Sun Logical Domains technology increases flexibility, isolates workload processing, and improves the potential for maximum server utilization.
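As a conceptual illustration of why the domain count is bounded by the Hypervisor rather than by physical devices, the sketch below models carving the 64 hardware threads of an UltraSPARC T2 into logical domains. It is a toy model under those stated assumptions; real domains are configured with the Solaris Logical Domains manager, not with code like this:

```python
# Toy model of logical domain planning on a 64-thread UltraSPARC T2.
# Real domains are created with the Logical Domains manager; this only
# illustrates that the limit (64) tracks the Hypervisor, not I/O devices.

HW_THREADS = 64
MAX_DOMAINS = 64  # Hypervisor-imposed ceiling on the T6320

def plan_domains(vcpus_per_domain: list[int]) -> list[range]:
    if len(vcpus_per_domain) > MAX_DOMAINS:
        raise ValueError("Hypervisor supports at most 64 domains")
    if sum(vcpus_per_domain) > HW_THREADS:
        raise ValueError("only 64 hardware threads to distribute")
    plan, start = [], 0
    for n in vcpus_per_domain:
        plan.append(range(start, start + n))  # isolated vCPU span per domain
        start += n
    return plan

# Four domains with 16 vCPUs each fully use one UltraSPARC T2 processor.
print(plan_domains([16, 16, 16, 16]))
```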

• Scalability and Support for CoolThreads Technology
The Solaris 10 OS is specifically designed to deliver the considerable resources of UltraSPARC T1 and T2 processor-based systems such as the Sun Blade T6320 and T6300 server modules. In fact, the Solaris 10 OS provides new functionality for optimal utilization, availability, security, and performance of these systems:
– CMT awareness — The Solaris 10 OS is aware of the UltraSPARC T1 and T2 processor hierarchies, so the scheduler can effectively balance the load across all the available pipelines. For instance, even though the UltraSPARC T2 processor is exposed as 64 logical processors, the Solaris OS understands the correlation between cores and the threads they support (a sketch following this list illustrates the mapping).
– Fine-granularity manageability — The Solaris 10 OS has the ability to enable or disable individual processors and threads. In the case of the UltraSPARC T1 and T2 processors, this ability extends to enabling or disabling individual cores and logical processors (hardware thread contexts). In addition, standard Solaris OS features such as processor sets provide the ability to define a group of logical processors and schedule processes or threads on them.


– Binding interfaces — The Solaris OS allows considerable flexibility: processes and individual threads can be bound to either a processor or a processor set, if required or desired.
– Support for virtualized networking and I/O, and accelerated cryptography — The Solaris OS contains technology to support and virtualize components and subsystems on the UltraSPARC T2 processor, including support for the dual on-chip 10 Gb Ethernet ports and the PCI Express interface. As part of a high-performance network architecture, CMT-aware device drivers are provided so that applications running within virtualization frameworks can effectively share I/O and network devices. Accelerated cryptography is supported through the Solaris Cryptographic Framework.
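The sketch below, referenced in the CMT awareness item above, is an illustrative model of the topology the scheduler reasons about: 64 logical CPUs folding into eight cores of eight hardware threads each. On a live system, Solaris itself discovers this hierarchy; psrinfo reports it and psrset builds processor sets over it.

```python
# Illustrative UltraSPARC T2 topology model: 64 logical CPUs map onto
# 8 cores x 8 hardware threads. This is a conceptual sketch, not a
# Solaris interface; psrinfo(1M) and psrset(1M) are the real tools.

THREADS_PER_CORE = 8

def core_of(logical_cpu: int) -> int:
    """Which physical core hosts a given logical CPU (0-63)?"""
    return logical_cpu // THREADS_PER_CORE

# Spreading four busy threads across cores 0-3 avoids stacking them on a
# single core's pipelines, which is what the CMT-aware scheduler achieves.
placement = [core_of(cpu) for cpu in (0, 8, 16, 24)]
print(placement)  # [0, 1, 2, 3]
```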

• Solaris Containers for Consolidation, Secure Partitioning, and Virtualization
Solaris Containers comprise a group of technologies that work together to efficiently manage system resources, virtualize the system, and provide a complete, isolated, and secure runtime environment for applications. Solaris Containers can be used to partition and allocate the considerable computational resources of the Sun Blade server modules. Solaris Zones and Solaris Resource Management work together with the Solaris fair-share scheduler on both SPARC- and x64-based server modules.
– Solaris Zones — Solaris Zones can be used to create an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris OS. Zones can be used to isolate applications and processes from the rest of the system. This isolation helps enhance security and reliability, since processes in one zone are prevented from interfering with processes running in another zone.
– Resource management — The resource management tools provided with the Solaris OS let administrators dedicate resources, such as CPU cycles, to specific applications. CPUs in a multicore multiprocessor system — such as those provided by Sun Blade server modules — can be logically partitioned into processor sets, bound to a resource pool, and ultimately assigned to a Solaris Zone. Resource pools provide the capability to separate workloads so that consumption of CPU resources does not overlap. Resource pools also provide a persistent configuration mechanism for processor sets and scheduling class assignment. In addition, the dynamic features of resource pools let administrators adjust system resources in response to changing workload demands.
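The sketch below gives a conceptual model of this partitioning: disjoint processor sets bound to pools, each pool assigned to a zone, so CPU consumption cannot overlap. It is purely illustrative; on Solaris this is configured with tools such as pooladm, poolcfg, and zonecfg rather than with code like this:

```python
# Conceptual model of Solaris resource pools: disjoint processor sets
# bound to pools, each pool assigned to a zone. Real configuration uses
# pooladm(1M)/poolcfg(1M)/zonecfg(1M); this only checks the invariant
# that CPU consumption between pools cannot overlap.

pools = {
    "db-pool":  {"cpus": {0, 1, 2, 3}, "zone": "db-zone"},
    "web-pool": {"cpus": {4, 5},       "zone": "web-zone"},
}

def validate(pools: dict) -> None:
    seen: set[int] = set()
    for name, pool in pools.items():
        overlap = seen & pool["cpus"]
        if overlap:
            raise ValueError(f"{name} reuses CPUs {overlap}")
        seen |= pool["cpus"]

validate(pools)  # passes: the processor sets are disjoint
print({p["zone"]: sorted(p["cpus"]) for p in pools.values()})
```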

• Solaris Dynamic Tracing (DTrace) to Instrument and Tune Live Software Environments
When production systems exhibit nonfatal errors or sub-par performance, the sheer complexity of modern distributed software environments can make accurate root-cause diagnosis extremely difficult. Unfortunately, most traditional approaches to solving this problem have proved time-consuming and inadequate, leaving many applications languishing far from their potential performance levels.
The Solaris DTrace facility on both SPARC and x64 platforms provides dynamic instrumentation and tracing for both application and kernel activities — even allowing tracing of application components running in a Java Virtual Machine (JVM) 1. DTrace lets developers and administrators explore the entire system to understand how it works, track down performance problems across many layers of software, or locate the cause of aberrant behavior. Tracing is accomplished by dynamically modifying the operating system kernel to record additional data at locations of interest. Best of all, although DTrace is always available and ready to use, it has no impact on system performance when not in use, making it particularly effective for monitoring and analyzing production systems.

1. The terms "Java Virtual Machine" and "JVM" mean a Virtual Machine for the Java platform.

• NUMA Optimization in the Solaris OS
With memory managed by each processor on Sun Blade X6220 server modules, the implementation represents a non-uniform memory access (NUMA) architecture: the time needed for a processor to access its own memory differs slightly from the time required to access memory managed by another processor. The Solaris OS provides technology that can specifically help applications improve performance on NUMA architectures.
– Memory Placement Optimization (MPO) — The Solaris 10 OS uses MPO to improve the placement of memory across the physical memory of a server, resulting in increased performance. Through MPO, the Solaris 10 OS works to help ensure that memory is as close as possible to the processors that access it, while still maintaining enough balance within the system. As a result, many database and HPC applications are able to run considerably faster with MPO.
– Hierarchical lgroup support (HLS) — HLS improves the MPO feature in the Solaris OS. HLS helps the Solaris OS optimize performance for systems with more complex memory latency hierarchies. HLS lets the Solaris OS distinguish between degrees of memory remoteness, allocating resources with the lowest possible latency for applications. If local resources are not available by default for a given application, HLS helps the Solaris OS allocate the nearest remote resources.
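A small model shows why placement matters: average memory latency is a weighted blend of local and remote access times, so raising the fraction of local accesses, which is what MPO does, lowers it directly. The nanosecond figures below are illustrative assumptions, not measured Sun Blade X6220 latencies:

```python
# Illustrative NUMA arithmetic: average latency as a mix of local and
# remote accesses. The nanosecond values are assumed for illustration,
# not measured Sun Blade X6220 figures.

LOCAL_NS, REMOTE_NS = 60.0, 100.0

def avg_latency(local_fraction: float) -> float:
    return local_fraction * LOCAL_NS + (1 - local_fraction) * REMOTE_NS

print(avg_latency(0.5))  # 80.0 ns with naive page placement
print(avg_latency(0.9))  # 64.0 ns once MPO keeps most pages local
```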

• Solaris ZFS File System
The Solaris ZFS file system offers a dramatic advance in data management, automating and consolidating complicated storage administration concepts and providing virtually unlimited scalability with the world's first 128-bit file system. ZFS is based on a transactional object model that removes most of the traditional constraints on I/O issue order, resulting in dramatic performance gains. ZFS also provides data integrity, protecting all data with 64-bit checksums that detect and correct silent data corruption.
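The principle behind this end-to-end integrity can be sketched in a few lines: keep a checksum of each block alongside the pointer to it, and verify on every read so silent corruption is detected rather than returned to the application. The sketch below is a conceptual illustration using SHA-256; it reproduces neither ZFS's on-disk format nor its actual checksum algorithms:

```python
# Conceptual illustration of checksummed block pointers, the idea behind
# ZFS data integrity. Not ZFS's real layout or algorithms; it only shows
# how silent corruption is detected on read.
import hashlib

def checksum(block: bytes) -> bytes:
    return hashlib.sha256(block).digest()

def read_block(block: bytes, expected: bytes) -> bytes:
    # Verify on every read, the way ZFS validates a block against the
    # checksum stored with its parent block pointer.
    if checksum(block) != expected:
        raise IOError("silent corruption detected")
    return block

data = b"important payload"
csum = checksum(data)   # stored alongside the block pointer
read_block(data, csum)  # verifies cleanly

try:
    read_block(b"imp0rtant payload", csum)  # one flipped character
except IOError as err:
    print(err)  # corruption is caught, not returned to the application
```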


• A Secure and Robust Enterprise-Class Environment
Best of all, the Solaris OS doesn't require arbitrary sacrifices. The Solaris Binary Compatibility Guarantee helps ensure that existing SPARC applications continue to run unchanged on UltraSPARC T1 and T2 platforms, protecting investments. Certified multi-level security protects Solaris environments from intrusion. Sun's comprehensive Fault Management Architecture means that elements such as Solaris Predictive Self-Healing can communicate directly with the hardware to help reduce both planned and unplanned downtime.


Chapter 3
Server Module Architecture

The Sun Blade 6000 and 6048 modular systems provide high performance, capacity, and massive levels of I/O through full-featured interfaces that use the latest technology and make the most of the innovative chassis design. The Sun Blade T6320, T6300, X6220, X6250, and X6450 server modules are described in this chapter, while PCI Express ExpressModules (EMs), PCI Express Network Express Modules (NEMs), and the Chassis Monitoring Module (CMM) are described in Chapter 4.

Sun Blade T6320 Server Module

The successful Sun Fire / Sun SPARC Enterprise T1000 and T2000 servers and the Sun Blade T6300 server module, powered by the breakthrough innovation of the UltraSPARC T1 processor, completely changed the equation on space, power, and cooling in the datacenter. With the advent of the UltraSPARC T2 processor, the Sun Blade T6320 server module takes chip multithreading performance, density, and energy efficiency to the next level. The Sun Blade T6320 server module is similar in capabilities to the Sun SPARC Enterprise T5120 and T5220 servers; its physical layout is shown in Figure 6.

Figure 6. The Sun Blade T6320 server module with key features called out (callouts: four hot-plug SAS or SATA 2.5-inch drives, midplane connector, fabric expansion module (FEM), 16 FB-DIMM sockets, UltraSPARC T2 processor, ILOM 2.0 service processor card, and RAID expansion module (REM))

With support for up to 64 threads and considerable network and I/O capacity, the Sun Blade T6320 server module virtually doubles the throughput of the earlier Sun Blade T6300 server module. In addition to its processing and memory density, each server module hosts additional modules, including an ILOM 2.0 service processor, a fabric expansion module (FEM), and a RAID expansion module (REM), all while retaining a compact form factor. With the Sun Blade T6320 server module, a single Sun Blade 6000 chassis can support up to 640 threads in just 10 rack units, and up to 3,072 threads can be supported in a single Sun Blade 6048 modular system chassis.


The UltraSPARC® T2 Processor with CoolThreads Technology

The UltraSPARC T2 processor extends Sun's Throughput Computing initiative with an elegant and robust architecture that delivers real performance to applications. Implemented as a massively threaded system on a chip (SoC), each UltraSPARC T2 processor supports:
• Up to eight cores at 1.2–1.4 GHz
• Eight threads per core, for a maximum of 64 threads per processor
• 4 MB L2 cache in eight banks (16-way set associative)
• Four on-chip memory controllers supporting up to 16 FB-DIMMs
• Up to 64 GB of memory (4 GB FB-DIMMs) with 60 GB/s memory bandwidth
• Eight fully pipelined floating point units (one per core)
• Dual on-chip 10 Gb Ethernet interfaces
• An integral PCI Express interface

In spite of its innovative new technology, the UltraSPARC T2 processor is fully SPARC V7, V8, and V9 compatible, and binary compatible with earlier SPARC processors. A high-level block diagram of the UltraSPARC T2 processor is shown in Figure 7.

Figure 7. Block-level diagram of an eight-core UltraSPARC T2 processor (diagram: eight cores C0-C7, each with an FPU and SPU, connected by a crossbar to eight L2 cache banks; four memory controller units (MCUs) driving FB-DIMM channels; a network interface unit with two 10 Gigabit Ethernet ports; and a system interface with an x8 PCI Express port)

The UltraSPARC T2 processor design recognizes that memory latency is the true bottleneck to improving performance. By increasing the number of threads supported by each core, and by further increasing network bandwidth, the UltraSPARC T2 processor is able to provide approximately twice the throughput of the UltraSPARC T1 processor. Each UltraSPARC T2 processor provides up to eight cores, with each core able to switch between up to eight threads (64 threads per processor). In addition, each core provides two integer execution units, so a single UltraSPARC core is capable of executing two threads at a time.

The eight cores on the UltraSPARC T2 processor are interconnected with a full on-chip non-blocking 8 x 9 crossbar switch. The crossbar connects each core to the eight banks of L2 cache, and to the system interface unit for I/O. The crossbar provides approximately 300 GB/second of bandwidth and supports 8-byte writes from a core to a bank and 16-byte reads from a bank to a core. The system interface unit connects networking and I/O directly to memory through the individual cache banks. FB-DIMM memory provides dedicated northbound and southbound lanes to and from the caches, accelerating performance and reducing latency. This approach provides higher bandwidth than DDR2 memory, with up to 42.4 GB/second of read bandwidth and 21 GB/second of write bandwidth.

Each core provides its own fully pipelined Floating Point and Graphics Unit (FGU), as well as a Stream Processing Unit (SPU). The FGUs greatly enhance floating point performance over that of the UltraSPARC T1 processor, while the SPUs provide wire-speed cryptographic acceleration, with more than 10 popular ciphers and algorithms supported, including DES, 3DES, AES, RC4, SHA-1, SHA-256, MD5, RSA up to 2048-bit keys, ECC, and CRC32. Embedding hardware cryptographic acceleration for these ciphers allows end-to-end encryption with no penalty in either performance or cost.

Server Module Architecture

Figure 8 provides a logical block-level diagram of the Sun Blade T6320 server module. Similar to the Sun SPARC Enterprise T5120 and T5220 rackmount servers, the Sun Blade T6320 server module contains an UltraSPARC T2 processor, FB-DIMM sockets for main memory, an Integrated Lights Out Manager (ILOM) service processor, and I/O subsystems. The memory configuration uses all eight of the UltraSPARC T2 processor's memory channels to provide better memory bandwidth. The on-chip memory controllers communicate directly with FB-DIMM memory through high-speed serial links. Up to 16 667 MHz FB-DIMMs may be configured in the server module.



Figure 8. Sun Blade T6320 server module block-level diagram (diagram: UltraSPARC T2 processor with 16 FB-DIMMs at 667 MHz; an x8 PCI Express link to a PEX8548 PCI Express switch that fans out x8 links (32 Gbps) to the two EMs, x4/XAUI links (16 Gbps) to the two NEMs, and connections to the FEM and REM sockets; an Intel Ophir controller supplying two gigabit Ethernet links to the NEMs; SAS links from the REM to the midplane; a service processor with 10/100 Mbps management Ethernet to the CMM; and front-panel USB 2.0, VGA, and serial ports)

For I/O, the UltraSPARC T2 processor incorporates an eight-lane (x8) PCI Express port capable of operating at 4 GB/second bidirectionally. In the Sun Blade T6320 server module, this port interfaces with a PCI Express switch chip that delivers various PCI Express links to other parts of the server module, and to the passive midplane. Two of the PCI Express interfaces provided by the PCI Express switch are made available through the PCI Express ExpressModules.

The PCI Express switch also provides PCI Express links to other internal components, including sockets for the fabric expansion module (FEM) and RAID expansion module (REM). The FEM socket allows for future expansion capabilities. The gigabit Ethernet interfaces are provided by an Intel chip connected to a x4 PCI Express interface on the PCI Express switch chip. Two gigabit Ethernet links are then routed through the midplane to the NEMs. The server module provides the logic for the gigabit Ethernet connection, while the NEM provides the physical interface.

Sun Blade RAID 0/1 Expansion Module

All standard Sun Blade T6320 server module configurations ship with the Sun Blade RAID 0/1 Expansion Module (REM). Based on the LSI SAS1068E storage controller, the Sun Blade RAID 0/1 REM provides a total of eight hard drive interfaces, or links. Four interfaces are used for the on-board hard drives, which may be Serial Attached SCSI (SAS) or Serial ATA (SATA). The other four links are routed to the midplane, where they interface with the NEMs for future use. The REM also provides RAID 0, 1, and 0+1 support.


Integrated Lights Out Management (ILOM) System Controller

Provided across many of Sun's x64 servers, the Integrated Lights Out Management (ILOM) service processor acts as a system controller, facilitating remote management and administration. The service processor is fully featured and is similar in implementation to that used in other Sun modular and rackmount x64 servers. As a result, Sun Blade T6320 server modules integrate easily with existing management infrastructure.

Critical to effective system management, the ILOM service processor:
• Implements an IPMI 2.0 compliant service processor, providing IPMI management functions to the server's firmware, OS, and applications, and to IPMI-based management tools accessing the service processor via the ILOM Ethernet management interface, giving visibility to the environmental sensors (both on the server module and elsewhere in the chassis)
• Manages inventory and environmental controls for the server, including CPUs, DIMMs, and power supplies, and provides HTTPS/CLI/SNMP access to this data
• Supplies remote textual console interfaces
• Provides a means to download upgrades to all system firmware

The ILOM service processor also allows the administrator to remotely manage the server, independent of the operating system running on the platform and without interfering with any system activity. ILOM can also send e-mail alerts of hardware failures and warnings, as well as other events related to each server. The ILOM circuitry runs independently from the server, using the server's standby power. As a result, ILOM firmware and software continue to function when the server operating system goes offline, or when the server is powered off. ILOM monitors the following Sun Blade T6320 server module conditions:
• CPU temperature conditions
• Hard drive presence
• Enclosure thermal conditions
• Fan speed and status
• Power supply status
• Voltage conditions
• Solaris watchdog, boot time-outs, and automatic server restart events


Sun Blade T6300 Server Module

The highly successful Sun Fire / Sun SPARC Enterprise T1000 and T2000 servers, powered by the breakthrough innovation of the UltraSPARC T1 processor, helped drive the fastest product ramp in Sun's history. The Sun Blade T6300 server module combines these strengths with the density, availability, and serviceability advantages of Sun's modular systems. The Sun Blade T6300 server module is shown in Figure 9.

Figure 9. The Sun Blade T6300 server module with key components called out (callouts: four hot-plug SAS or SATA 2.5-inch drives, midplane connector, eight DDR2 400 DIMM sockets, UltraSPARC T1 processor, and service processor)

The UltraSPARC® T1 Processor with CoolThreads Technology

The UltraSPARC T1 multicore, multithreaded processor was the first chip to fully implement Sun's Throughput Computing initiative. Each UltraSPARC T1 processor used in Sun Blade T6300 server modules has either six or eight cores (individual execution pipelines) on the same chip. Each core, in turn, supports up to four hardware thread contexts, each a set of registers that represents the thread's state. The processor is able to switch threads on every clock cycle in a round-robin ordered fashion, skipping threads that are stalled waiting for a memory access.

Figure 10. Block-level diagram of an eight-core UltraSPARC T1 processor (diagram: eight cores on an on-chip crossbar interconnect, four L2 cache banks with four DDR2 SDRAM channels, a shared FPU, and a JBus system interface)


As shown in Figure 10, the individual processor cores are connected by a high-speed, low-latency crossbar interconnect implemented on the silicon itself. The UltraSPARC T1 processor includes very fast interconnects between the processor, cores, memory, and system resources, including:
• A 134 GB/second crossbar switch that connects all cores
• A JBus interface with a 3.1 GB/second peak effective bandwidth
• Four DDR2 channels (25.6 GB/second total) for faster access to memory

The memory subsystem of the UltraSPARC T1 processor is implemented as follows:
• Each core has an instruction cache, a data cache, an instruction TLB, and a data TLB, shared by the four thread contexts. Each UltraSPARC T1 processor has a twelve-way associative unified Level 2 (L2) on-chip cache, and each hardware thread context shares the entire L2 cache.
• This design results in uniform memory latency from all cores (Uniform Memory Access, UMA, not Non-Uniform Memory Access, NUMA).
• Memory is located close to processor resources, and four memory controllers provide very high bandwidth to memory, with a theoretical maximum of 25.6 GB per second.
• Extensive built-in RAS features include ECC protection of register files, Extended-ECC (similar to IBM's Chipkill feature), memory sparing, soft error rate detection, and extensive parity/retry protection of caches.

Each core has a Modular Arithmetic Unit (MAU) that supports modular multiplication and exponentiation to help accelerate Secure Sockets Layer (SSL) processing. A single Floating Point Unit (FPU) is shared by all cores, so the UltraSPARC T1 processor is generally not an optimal choice for applications with intensive floating point requirements.

Server Module Architecture

Figure 11 provides a logical block-level diagram of the Sun Blade T6300 server module. Similar in design to the Sun SPARC Enterprise T2000 server, the memory configuration uses all four of the processor's memory controllers to provide better memory bandwidth, and up to eight DDR2 533 DIMMs may be configured in the server module. As in other UltraSPARC T1 based systems, the actual memory speed is 400 MHz.
As in other UltraSPARC T1 based systems, the actual memory speed is 400 MHz.



Figure 11. Sun Blade T6300 server module block-level diagram (diagram: UltraSPARC T1 processor with eight DDR2 533 DIMMs at 400 MHz on 3.2 GB/s memory channels; a JBus connection to the Fire chip; two PCI Express bridges providing x8 links (32 Gbps) to the two EMs and links to the two NEMs; an LSI SAS1068e storage controller with SAS links to the midplane; an Intel Ophir controller supplying two gigabit Ethernet links; a Motorola MPC885-based ALOM service processor with 10/100 Mbps management Ethernet to the CMM; and front-panel serial and USB 2.0 ports)

For I/O, the UltraSPARC T1 processor's JBus interface connects to the Fire chip, which directs I/O through a pair of PCI Express bridges that in turn provide four x8 PCI Express interfaces. Two of the PCI Express interfaces provided by the PCI Express bridges are made available through PCI Express ExpressModules, and the other two interfaces are connected to PCI Express Network Express Modules.

For storage, an LSI SAS1068e controller is included on the server module, providing eight hard drive interfaces, or links. Four interfaces are used for the on-board hard drives, which may be Serial Attached SCSI (SAS) or Serial ATA (SATA). The other four links are routed to the midplane, where they interface with the NEM slots for future use. The storage controller is capable of RAID 0 or 1, and up to two volumes are supported in RAID configurations.

The gigabit Ethernet interfaces are provided by an Intel chip connected to a x4 PCI Express interface on one of the bridges. Two gigabit Ethernet links are then routed through the midplane to the NEMs. The server module provides the logic for the gigabit Ethernet connection, while the NEM provides the physical interface.

The ALOM Service Processor

The remote management capabilities of the Sun Blade T6300 server module are a complete implementation of the Advanced Lights Out Manager (ALOM). The ALOM service processor allows the Sun Blade T6300 server module to be remotely managed and administered identically to the Sun Fire / SPARC Enterprise T1000 and T2000 servers.



ALOM allows the administrator to monitor and control a server, either over a network or by using a dedicated serial port for connection to a terminal or terminal server. ALOM provides a command-line interface that can be used to remotely administer geographically distributed or physically inaccessible machines. In addition, ALOM allows administrators to run diagnostics remotely (such as power-on self-test) that would otherwise require physical proximity to the server's serial port. ALOM can also be configured to send e-mail alerts of hardware failures, hardware warnings, and other events related to the server or to ALOM.

The ALOM circuitry runs independently of the server, using the server's standby power. As a result, ALOM firmware and software continue to function when the server operating system goes offline or when the server is powered off. ALOM monitors disk drives, fans, CPUs, power supplies, system enclosure temperature, voltages, and the server front panel, so that the administrator does not have to.

ALOM specifically monitors the following Sun Blade T6300 server module components:
• CPU temperature conditions
• Enclosure thermal conditions
• Fan speed and status
• Power supply status
• Voltage thresholds

Sun Blade X6220 Server Module

The Sun Blade X6220 server module provides a two-socket x64-based platform with significant computational, memory, and I/O density. The result is a compact, efficient, and flexible package with leading floating-point performance for demanding applications such as HPC. The physical layout of the Sun Blade X6220 server module is illustrated in Figure 12.

Figure 12. The Sun Blade X6220 server module with key components called out (callouts: four hot-plug SAS or SATA 2.5-inch drives, midplane connector, 16 DDR2 667 DIMM sockets, AMD Opteron processors, and service processor)



Second-Generation AMD Opteron 2000 Series Processors

The Sun Blade X6220 server module is based on the Second-Generation AMD Opteron 2000 Series processor, leveraging AMD's Direct Connect Architecture and the nVidia 2200 Professional chipset for scalability and fast I/O throughput. The Sun Blade X6220 server module initially supports dual-core AMD Opteron processors, and will support AMD's future processors as they become available. The Sun Blade 6000 chassis provides sufficient airflow for the server modules to be configured with any type of AMD Opteron processor, including the Special Edition (SE) versions that consume more power but provide greater clock speed.

The AMD Opteron processor extends the ubiquitous x86 architecture to accommodate 64-bit processing. Formerly known as x86-64, AMD's enhancements to the x86 architecture allow seamless migration to the superior performance of x64 64-bit technology. The AMD Opteron processor (Figure 13) was designed from the start for dual-core functionality, with a crossbar switch and system request interface. This approach defines a new class of computing by combining full x86 compatibility, a high-performance 64-bit architecture, and the economics of an industry-standard processor.

Figure 13. High-level architectural perspective of a dual-core AMD Opteron processor (diagram: two cores, each with 128 KB L1 and 1 MB L2 caches, sharing a system request interface and crossbar switch that connect to the DDR2 memory controller and three HyperTransport links)

Enhancements of the AMD Opteron processor over the legacy x86 architecture include:
• Sixteen 64-bit general-purpose integer registers, quadrupling the general-purpose register space available to applications and device drivers as compared to x86 systems
• Sixteen 128-bit XMM registers that provide enhanced multimedia performance, doubling the register space of any current SSE/SSE2 implementation
• A full 64-bit virtual address space, with 40 bits of physical memory addressing and 48 bits of virtual addressing that can support systems with up to 256 terabytes of physical memory
• Support for 64-bit operating systems, providing fully transparent and simultaneous 32-bit and 64-bit platform application multitasking
• A 128-bit wide, on-chip DDR memory controller that supports ECC and Enhanced ECC and provides low-latency memory bandwidth that scales as processors are added
provides low-latency memory bandwidth that scales as processors are added



Each processor core has a dedicated 1 MB Level-2 cache, and both cores use the System Request Interface and Crossbar Switch to share the memory controller and access the three HyperTransport links. This sharing is an effective approach, since performance characterizations of single-core systems have revealed that memory and HyperTransport bandwidths are typically under-utilized, even while running high-end server workloads.

The AMD Opteron processor's integrated HyperTransport technology links provide a scalable bandwidth interconnect among processors, I/O subsystems, and other chipsets. HyperTransport technology interconnects help increase overall system performance by removing I/O bottlenecks and efficiently integrating with legacy buses, increasing bandwidth and speed, and reducing processor latency. At 16 x 16 bits and 1 GHz operation, HyperTransport technology provides support for up to 8 GB/s of bandwidth per link.

Server Module Architecture

As shown in Figure 14, the AMD Opteron processor uses DDR2 memory running at a memory bus clock rate of 667 MHz. Up to 10.7 GB per second of memory bandwidth is provided by each memory controller, for a total aggregate memory bandwidth of 21.4 GB per second. These higher clock rates can be sustained even when the CPUs are configured with up to four DDR2 DIMMs each. When all eight DIMMs per processor are populated, the clock speed drops to 533 MHz. The total memory capacity available is 64 GB per server module.
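These bandwidth figures follow from the interface widths and clocks. The sketch below reproduces the arithmetic, assuming DDR2-667 means 667 million transfers per second on a 128-bit (16-byte) controller, and HyperTransport means 16 bits per direction at a 1 GHz double-data-rate clock:

```python
# Reproducing the quoted bandwidth figures from interface widths and clocks.

# DDR2-667 on a 128-bit (16-byte) on-chip memory controller:
per_controller_gbs = round(667e6 * 16 / 1e9, 1)
print(per_controller_gbs)      # 10.7 GB/s per memory controller
print(2 * per_controller_gbs)  # 21.4 GB/s aggregate for two processors

# HyperTransport: 16 bits per direction, 1 GHz clock, double data rate.
per_direction_gbs = 2e9 * 16 / 8 / 1e9
print(per_direction_gbs * 2)   # 8.0 GB/s aggregate per link
```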

Figure 14. Sun Blade X6220 server module block-level diagram (diagram: two AMD Opteron 2000 Series processors, each with DDR2 667 memory at 10.7 GB/s, joined by 8 GB/s HyperTransport links; nForce4 CK8-04 and IO-04 PCI Express bridges providing x8 links (32 Gbps) to the EMs and NEMs; an LSI SAS1068e storage controller with SAS links to the midplane; two gigabit Ethernet links to the NEMs; a Motorola MPC8275-based service processor with 10/100 Mbps management Ethernet to the CMM; Compact Flash; RageXL graphics with video-over-LAN redirect; and front-panel USB 2.0, VGA, and serial connections via adapter cable)


The nVidia PCI Express bridges are connected to the AMD Opteron processors over 8 GB per second HyperTransport links to provide maximum throughput capacity to the PCI Express lanes that are directed through the passive midplane. Two HyperTransport links connect the two CPUs, with one used for cache coherency and the other for I/O communication between the processors and the second PCI Express bridge. These links also run at 8 GB per second. Two x8 PCI Express interfaces are pulled from each of the PCI Express bridges, with each link providing a 32 Gbps interface through the midplane. Each PCI Express bridge also provides a gigabit Ethernet interface that is routed through the passive midplane to the PCI Express Network Express Modules.

Sun Blade X6220 server modules also provide a Compact Flash slot, connected to the system through an IDE connection to the nVidia chipset. By inserting a standard Compact Flash device, administrators can store valuable data or even install a bootable operating environment. The Compact Flash device is internal to the server module and cannot be removed unless the server module is removed from the chassis.

As in the Sun Blade T6300 server module, an LSI SAS1068e controller is located on the Sun Blade X6220 server module, providing eight hard drive interfaces. Four interfaces are used for the on-board hard drives (either SAS or SATA). The other four links are routed to the midplane for future use. The storage controller is capable of RAID 0 or 1, and up to two volumes are supported in RAID configurations.

The Integrated Lights Out Management (ILOM) Service Processor

The Integrated Lights Out Management (ILOM) service processor is fully featured and is identical in implementation to that used in other Sun modular and rackmount x64 servers. As a result, the Sun Blade X6220 server module integrates easily with existing management infrastructure.

Critical to effective system management, the ILOM service processor:
• Implements an IPMI 2.0 compliant BMC, providing IPMI management functions to the server module's BIOS, OS, and applications, and to IPMI-based management tools accessing the BMC either through the OS interfaces or via the ILOM Ethernet management interface, providing visibility to the environmental sensors (both on the server module and elsewhere in the chassis)
• Manages inventory and environmental controls for the server module, including CPUs, DIMMs, and EMs, and provides HTTPS/CLI/SNMP access to this data
• Supplies remote textual and graphical console interfaces, as well as a remote storage (USB) interface (collectively, these functions are referred to as Remote Keyboard, Video, Mouse, and Storage, or RKVMS)
• Provides a means to download BIOS images and firmware


The ILOM service processor also allows the administrator to remotely manage the server, independently of the operating system running on the platform and without interfering with any system activity. To facilitate full-featured remote management, the ILOM service processor provides remote keyboard, video, mouse, and storage (RKVMS) support that is tightly integrated with the Sun Blade server modules. Together, these capabilities allow the server module to be administered remotely, while accessing keyboard, mouse, video, and storage devices local to the administrator (Figure 15). ILOM Remote Console support is provided on the ILOM service processor and can be downloaded and executed on the management console. Input/output of virtual devices is handled between ILOM on the Sun Blade server module and the ILOM Remote Console on the Web-based client management console.

Figure 15. Remote keyboard, video, mouse, and storage (RKVMS) support in the ILOM service processor allows full-featured remote management for Sun Blade server modules (diagram: the ILOM Remote Console, connected to ILOM over the management Ethernet, displays redirected video of up to 1024x768 at 60 Hz in an application window, while the local keyboard, mouse, CD/DVD-ROM, floppy, or image files are presented to the server module's BIOS and OS as USB devices emulated by ILOM)

Sun Blade X6250 Server Module

Broadening Sun's x64-based modular offerings, the Sun Blade X6250 server module provides support for dual-core and quad-core Intel Xeon processors. Intel Xeon Processor 5100 series CPUs provide the highest clock speeds in the industry in a dual-core package. Intel Xeon Processor 5300 series CPUs provide quad-core processing power. Figure 16 shows a physical view of the Sun Blade X6250 server module with key components identified.



[Figure 16 illustration: callouts identify the midplane connector, the Intel Xeon processors, 16 FB-DIMM 667 sockets, the RAID Expansion Module, and the hot-plug SAS or SATA 2.5-inch drives.]

Figure 16. The Sun Blade X6250 server module with key components called out

Intel Xeon Processor 5100 and 5300 Series

Utilizing the Intel Core microarchitecture, the Intel Xeon Processor 5100 series and 5300 series provide performance for multiple application types and user environments, in a substantially reduced power envelope. The dual-core 5100 series processor provides significant performance headroom for multithreaded applications and helps boost system utilization through virtualization and application responsiveness. The quad-core 5300 series processor maximizes performance and performance per watt, providing increased density for datacenter deployments.

Logical block-level diagrams for both the 5100 and 5300 series processors are provided in Figure 17. The 5100 series processor includes two processor cores, each provided with a 64 KB level-1 cache (32 KB instruction/32 KB data). Both cores share a 4 MB level-2 cache to increase cache-to-processor data transfers, maximize main-memory-to-processor bandwidth, and reduce latency. The 5300 series processor provides four processor cores, with each pair of processor cores sharing a 4 MB level-2 cache for a total of 8 MB. The processors share a high-speed front side bus (FSB).

[Figure 17 illustration: in the dual-core Intel Xeon 5100 series, two cores, each with a 64 KB L1 cache, share a single 4 MB L2 cache and one front side bus; in the quad-core Intel Xeon 5300 series, four cores, each with a 64 KB L1 cache, share two 4 MB L2 caches (one per pair of cores) and the front side bus.]

Figure 17. Intel Xeon Processor 5100 and 5300 series block-level diagrams




Server Module Architecture

The Sun Blade X6250 server module (Figure 18) uses the Intel 5000P Memory Controller Hub (MCH), which provides communication to the processors over two high-speed front side buses (FSBs). The FSBs run at 1,333 MT/s for the higher clock speed processors and at 1,066 MT/s for the slower speed bins. The maximum bandwidth through each FSB is 10.5 GB per second, for an aggregate processor bandwidth of 21 GB per second.
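As a quick check on these numbers (a back-of-the-envelope derivation, not taken from the original paper), the per-bus figure follows from the transfer rate and the 64-bit (8-byte) width of the front side bus:

    \[ 1333\ \mathrm{MT/s} \times 8\ \mathrm{B/transfer} \approx 10.6\ \mathrm{GB/s}\ \text{per FSB (quoted as 10.5 GB/s)}, \qquad 2 \times 10.5\ \mathrm{GB/s} \approx 21\ \mathrm{GB/s}\ \text{aggregate} \]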

[Figure 18 illustration: block-level diagram showing the two processors on independent 10.5 GB/s front side buses into the Intel 5000P MCH with FB-DIMM 667 memory channels at 5.3 GB/s each; PCIe x8 (32 Gbps) links from the MCH to the Fabric Expansion Module (FEM) socket and to an EM across the passive midplane; a RAID Expansion Module (REM) with a SAS hardware RAID controller serving the SAS/SATA HDDs over x4 SATA/SAS, with SAS links to the NEMs; the ESB2 I/O bridge providing the second PCIe x8 EM link, dual Gigabit Ethernet (PCIe x4 or XAUI) to NEM #0 and NEM #1, and the IDE compact flash; and the AST2000 service processor with Super I/O, a MUX to front-panel USB 2.0, DB-9 serial, and VGA HD-15 via adapter cable, and a 10/100 PHY to the management Ethernet and CMM.]

Figure 18. Sun Blade X6250 server module block-level diagram

The MCH also provides the system with high-speed memory controllers and PCI Express bridges, as well as a high-speed link to a second I/O bridge (the ESB2 I/O control hub). The total memory bandwidth provides read speeds of up to 21.3 GB per second and write speeds of up to 17 GB per second. One of the PCI Express x8 lane interfaces from the MCH is directly routed to a PCI Express ExpressModule via the passive midplane. The other interface is routed to the Fabric Expansion Module (FEM) socket — available for future expansion capabilities.

The Intel ESB2 PCI Express bridge provides connectivity to the other PCI Express ExpressModule and access to the dual gigabit Ethernet interfaces that are routed through the passive midplane to the NEMs. This bridge also provides the IDE connection to the compact flash device, used for boot and storage capabilities.

Sun Blade X6250 RAID Expansion Module (REM)

All standard Sun Blade X6250 server module configurations ship with the Sun Blade X6250 RAID Expansion Module (REM). The REM provides a total of eight SAS ports, battery-backed cache, and RAID 0, 1, 5, and 1+0 capabilities. Using the REM, the server module provides SAS connectivity on the internal drive slots. Four 1x SAS links are also routed to the NEMs for future storage expansion. Build-to-order Sun Blade X6250 server modules can be ordered without the REM. While these server modules will not provide SAS support, SATA connectivity to the internal hard disk drives can be provided by the Intel ESB2 PCI Express bridge.

The Embedded LOM Service Processor

Similar to the other Sun Blade 6000 server modules, the Sun Blade X6250 server module includes an embedded lights out manager (embedded LOM). This built-in, hardware-based service processor enables organizations to consolidate system management functions with remote power control and monitoring capabilities. The service processor is IPMI 2.0 compliant and enables specific capabilities including system configuration information retrieval, key hardware component monitoring, remote power control, full local and remote keyboard, video, mouse (KVM) access, remote media attachment, SNMP v1, v2c, and v3 support, and event notification and logging.

Administrators simply and securely access the service processor on the Sun Blade X6250 server module using a secure shell command line, redirected console, or SSL-based Web browser interface from a remote workstation. The Distributed Management Task Force (DMTF) Systems Management Architecture for Server Hardware (SMASH) command line protocol is supported over both the serial interface and the secure shell network interface. A Web server and Java Web Start remote console application are embedded in the service processor. This approach minimizes the need for any special-purpose software installation on the administrative workstation to take advantage of Web-based access. For enhanced security, the service processor includes multilevel role-based access to features. The service processor flexibly supports native and Active Directory Service lookup of authentication data. All functions can be provided out-of-band through a designated serial or network interface, eliminating the performance impact to workload processing.
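For example, scripted access over the secure shell interface needs nothing beyond a standard SSH client. The minimal sketch below uses the Python paramiko library; the address, credentials, and the "show /SP" target are illustrative placeholders rather than commands documented in this paper.

    #!/usr/bin/env python
    # Minimal sketch: run one CLI command on the embedded LOM over SSH
    # using the paramiko library. Address, credentials, and the command
    # target are illustrative placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("10.0.0.50", username="root", password="changeme")

    stdin, stdout, stderr = client.exec_command("show /SP")  # hypothetical target
    print(stdout.read().decode())
    client.close()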

Sun Blade X6450 Server Module

Adding to the capabilities of the Sun Blade X6250 server module, the Sun Blade X6450 server module provides increased scalability of dual-core and quad-core Intel Xeon processors. Dual-core Intel Xeon Processor 7200 series and quad-core Intel Xeon Processor 7300 series CPUs provide support for quad-socket configurations, such as those offered by the Sun Blade X6450 server module. Offering both quad-core and quad-socket support in a blade package provides significant computational density while offering the flexible advantages of a modular platform. Figure 19 illustrates a physical view of the Sun Blade X6450 server module with key components identified.



[Figure 19 illustration: callouts identify the midplane connector, the Intel Xeon processors, 24 FB-DIMM 667 sockets, the compact flash storage, and the Intel 7000 MCH (Clarksboro Northbridge).]

Figure 19. The Sun Blade X6450 server module supports up to four Intel Xeon processors

Intel Xeon Processor 7200 and 7300 Series

The Intel Xeon Processor 7200 series and 7300 series processors use a Multi-Chip Package (MCP) to deliver quad-core configurations. This packaging approach increases die yields and lowers manufacturing costs, which helps Intel and Sun to deliver higher performance at lower price points. The dual-core Intel Xeon Processor 7200 series and quad-core Intel Xeon Processor 7300 series both incorporate two dies per processor package, with each die capable of containing two processor cores (Figure 20).

Figure 20. Intel Xeon Processor 7200 and 7300 series block-level diagrams

In the dual-core Intel Xeon 7200 series, each die includes one processor core, while in the quad-core Intel Xeon Processor 7300 series, each die contains two cores. In a Sun Blade X6450 server module with four processors, this dense configuration provides up to 16 execution cores in a compact blade form factor. The 7000 sequence processor families share these additional features:

• An on-die Level 1 (L1) instruction/data cache (64 KB per die)
• An on-die Level 2 (L2) cache (4 MB per die, for a total of 8 MB in packages with two dies)
• Multiple, independent front side buses (FSBs) that act as high-bandwidth system interconnects




Server Module Architecture

The Sun Blade X6450 server module (Figure 21) uses the Intel 7000 Memory Controller Hub (MCH) — also known as the Clarksboro Northbridge — which provides communication to the processors over four high-speed front side buses (FSBs). The FSBs run at 266 MHz, or 1,066 MT/s. The maximum bandwidth through each FSB is 8.5 GB per second, for an aggregate processor bandwidth of 34 GB per second.

[Figure 21 illustration: block-level diagram showing the four processors on independent 8.5 GB/s front side buses into the Intel 7000 MCH with FB-DIMM 667 memory channels at 5.3 GB/s each; PCIe x8 (32 Gbps) links from the MCH to the Fabric Expansion Module and to the EMs across the passive midplane; an optional SAS hardware RAID controller with SAS links to the midplane; the ESB2 I/O bridge providing dual Gigabit Ethernet (PCIe x4 or XAUI) to NEM #0 and NEM #1 and the IDE compact flash; and the AST2000 service processor with Super I/O, a MUX to front-panel USB 2.0, DB-9 serial, and VGA HD-15 via adapter cable, and a 10/100 PHY to the management Ethernet and CMM.]

Figure 21. Sun Blade X6450 server module block-level diagram

The MCH also provides the system with high-speed memory controllers and PCI Express bridges, as well as a high-speed link to a second I/O bridge (the ESB2 I/O control hub). The total memory bandwidth provides read speeds of up to 21.3 GB per second and write speeds of up to 17 GB per second. One of the PCI Express x8 lane interfaces from the MCH is directly routed to a PCI Express ExpressModule via the passive midplane. The other interface is routed to the Fabric Expansion Module (FEM) socket — available for future expansion capabilities. An x4 PCI Express connection powers an optional RAID Expansion Module (REM) that can be configured to access Serial Attached SCSI (SAS) storage devices over the passive midplane.

The Intel ESB2 I/O PCI Express bridge provides connectivity to the other PCI Express ExpressModule and access to the dual gigabit Ethernet interfaces that are routed through the passive midplane to the NEMs. This bridge also provides the IDE connection to the compact flash device. The Sun Blade X6450 server module is diskless and contains no traditional hard drives. The integrated CompactFlash device provides a means for internal storage that can be used as a boot device or as a generic storage medium.
medium.



The Embedded LOM Service Processor

Like the Sun Blade X6250 server module, the Sun Blade X6450 server module includes an embedded lights out manager (embedded LOM). This built-in, hardware-based service processor enables organizations to consolidate system management functions with remote power control and monitoring capabilities. The service processor is IPMI 2.0 compliant and enables specific capabilities including system configuration information retrieval, key hardware component monitoring, remote power control, full local and remote keyboard, video, mouse (KVM) access, remote media attachment, SNMP v1, v2c, and v3 support, and event notification and logging.

Administrators simply and securely access the service processor on the Sun Blade X6450 server module using a secure shell command line, redirected console, or SSL-based Web browser interface from a remote workstation. The Distributed Management Task Force (DMTF) Systems Management Architecture for Server Hardware (SMASH) command line protocol is supported over both the serial interface and the secure shell network interface. A Web server and Java Web Start remote console application are embedded in the service processor. This approach minimizes the need for any special-purpose software installation on the administrative workstation to take advantage of Web-based access. For enhanced security, the service processor includes multilevel role-based access to features. The service processor flexibly supports native and Active Directory Service lookup of authentication data. All functions can be provided out-of-band through a designated serial or network interface, eliminating the performance impact to workload processing.



Chapter 4

I/O Expansion, Networking, and Management

Today's datacenter investments need to be protected, especially as systems are repurposed, expanded, and altered to meet dynamic demands. Modular systems can play a key role, allowing organizations to derive maximum benefit from their infrastructure, even as their needs change. More importantly, modular systems must avoid arbitrary limitations that restrict choice in I/O, networking, or management. The Sun Blade 6000 and 6048 modular systems in particular are designed to work with open and multivendor industry standards without dictating components, topologies, or management scenarios.

Server Module Hard Drives

A choice of hot-swappable 2.5-inch SAS or SATA hard disk drives is provided with all Sun Blade 6000 server modules except for the Sun Blade X6450 server module.

• Serial Attached SCSI (SAS) drives provide high performance and high density. Drives are 10,000 rpm and available in capacities of 73 GB or 146 GB. These drives provide enterprise-class reliability with 1.6 million hours mean time between failures (MTBF); see the conversion below.
• Serial ATA (SATA) drives are 5,400 rpm and available in 80 GB capacities.

Please check Sun's Website (www.sun.com/servers/blades/6000) for the latest available disk drive offerings.
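To put the 1.6-million-hour MTBF figure in operational terms, the standard conversion to an annualized failure rate (an editorial back-of-the-envelope calculation, not a figure from this paper) is:

    \[ \mathrm{AFR} \approx \frac{8760\ \mathrm{h/yr}}{1.6\times 10^{6}\ \mathrm{h}} \approx 0.55\%\ \text{per drive-year} \]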

PCI Express ExpressModules (EMs)

Industry-standard I/O, long a staple of rackmount and vertically scalable servers, has been elusive in legacy blade platforms. Unfortunately, the lack of industry-standard I/O has meant that customers often paid more for fewer options, and were ultimately limited by a single vendor's innovation. Unlike legacy blade platforms, Sun Blade 6000 and 6048 modular systems accommodate PCI Express ExpressModules (EMs) compliant with the PCI SIG form factor. This approach allows for a wealth of expansion module options from multiple expansion module vendors, and avoids a single-vendor lock on innovation. The same EMs can be used on both Sun Blade 6000 and 6048 modular systems as well as Sun Blade 8000 modular systems.

The passive midplane implements connectivity between the EMs and the server modules, and physically assigns pairs of EMs to individual server modules. As shown in Figure 22, EMs 0 and 1 (from right to left) are connected to server module 0, EMs 2 and 3 are connected to server module 1, EMs 4 and 5 are connected to server module 2, and so on. Each EM is supplied with an x8 PCI Express link back to its associated server module, providing up to 32 Gb/s of I/O throughput. EMs are hot-plug capable according to the standard defined by the PCI SIG, and are fully customer replaceable without opening the chassis or removing the server module.
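The 32 Gb/s figure is consistent with first-generation PCI Express arithmetic when payload bandwidth is counted in both directions after 8b/10b encoding overhead (an editorial check, not a derivation given in the paper):

    \[ 8\ \text{lanes} \times 2.5\ \mathrm{Gb/s} \times \tfrac{8}{10} = 16\ \mathrm{Gb/s}\ \text{per direction}, \qquad 2 \times 16\ \mathrm{Gb/s} = 32\ \mathrm{Gb/s} \]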



[Figure 22 illustration: EM slots labeled by their associated server modules, 0 through 9.]

Figure 22. A pair of 8-lane (x8) PCI Express slots allows up to two PCI Express ExpressModules per server module in the Sun Blade 6000 (shown) and 6048 chassis

With the industry-standard PCI Express ExpressModule form factor, EMs are available for multiple types of connectivity, including:

• 4 Gb Fibre Channel, dual-port (QLogic, SG-XPCIE2FC-QB4-Z)*
• 4 Gb Fibre Channel, dual-port (Emulex, SG-XPCIE2FC-EB4-Z)
• Gb Ethernet, dual-port (copper, X7282A-Z)*
• Gb Ethernet, dual-port (fiber, X7283A-Z)
• 4x InfiniBand, dual-port (Mellanox, X1288A-Z)*
• 12 Gb SAS, dual-port (LSI Logic, SG-XPCIE8SAS-EB-Z)
• 12 Gb SAS RAID, single-port (Intel SRL, SGXPCIESAS-R-BLD-Z)
• Gb Ethernet, quad-port (copper, X7284A-Z)
• Gb Ethernet, quad-port (copper, X7287A-Z)
• 10 Gb Ethernet, dual-port (fiber, X1028A-Z)
• 4x InfiniBand, no-mem, single-port (Mellanox, X1290A)

EMs marked with an asterisk are shown in Figure 23. For the latest available EMs, please refer to www.sun.com/servers/blades/6000.

Figure 23. Several PCI Express ExpressModules available for the Sun Blade 6000 modular system



PCI Express Network Express Modules (NEMs)

Many legacy blade platforms include integrated network switching as a way to gain aggregate network access to the individual server modules. Unfortunately, these switches are often restrictive in their options and may dictate topology and management choices. As a result, datacenters often find legacy blade server platforms difficult to integrate into their existing networks, or are resistant to admitting new switch hardware into their chosen network fabrics.

Sun Blade 6000 and 6048 modular systems address this problem through a specific PCI Express Network Express Module (NEM) form factor that provides configurable network I/O for all of the server modules in the system. Connecting to all of the installed server modules through the passive midplane, NEMs represent a space-efficient mechanism for deploying high-density configurable I/O, and provide bulk I/O options for the entire chassis.

Gigabit Ethernet Pass-Through NEMs

Gigabit Ethernet Pass-Through NEMs are available for configuration with both the Sun Blade 6000 and 6048 modular systems, providing pass-through access to the gigabit Ethernet interfaces located on the server modules. Separate NEMs are provided to support the different numbers of server modules in the two chassis. Gigabit Ethernet interface logic resides on the server module while the passive midplane simply provides access and connectivity. With the Gigabit Ethernet Pass-Through NEMs, individual servers can be connected to external switches just as easily as rackmount servers — with no arbitrary topological constraints.

The Gigabit Ethernet Pass-Through NEMs provide an RJ-45 connector for each of the server modules supported in the respective chassis — 10 for the Sun Blade 6000 modular system, and 12 for the Sun Blade 6048 modular system shelf. Adding a second pass-through NEM provides access to the second gigabit Ethernet connection on each server module. Figure 24 illustrates the Gigabit Ethernet Pass-Through NEM.

Figure 24. The Gigabit Ethernet Pass-Through NEM provides a 10/100/1000BASE-T port for each installed Sun Blade server module (Sun Blade 6000 Pass-Through NEM shown)



Sun Blade 6048 InfiniBand Switched NEM

Providing dense connectivity to servers while minimizing cables is one of the issues facing large HPC cluster deployments. The Sun Blade 6048 InfiniBand Switched NEM solves this challenge by integrating an InfiniBand leaf switch into a Network Express Module for the Sun Blade 6048 chassis. The NEM shares components, cables, and connectors with the Sun Datacenter Switch (DS) 3456 and 3x24, facilitating build-out of very large InfiniBand clusters (up to 288 nodes per Sun DS 3x24, and up to 3,456 nodes per Sun DS 3456). Up to four Sun DS 3456 core switches can be employed to construct truly massive clusters with up to 13,824 Sun Blade 6000 server modules. A block-level diagram of the dual-height NEM is provided in Figure 25, aligned with an image of the back panel.

[Figure 25 illustration: 12 PCI Express x8 connections from the server modules feed 12 HCAs, which connect to two 24-port, 384 Gbps InfiniBand switch chips on the NEM; the external NEM profile shows the InfiniBand connectors and the gigabit Ethernet connections to each server module.]

Figure 25. The Sun Blade 6048 InfiniBand Switched NEM provides eight switched 12x InfiniBand connections to the two on-board 24-port switches, and twelve pass-through gigabit Ethernet ports, one to each Sun Blade 6000 server module in the Sun Blade 6048 shelf

Each Sun Blade 6048 InfiniBand Switched NEM employs two of the same Mellanox InfiniScale III 24-port switch chips used in the Sun DS 3456 and 3x24 InfiniBand switches, each providing 12 internal and 12 external connections. Redundant internal connections are provided from Mellanox ConnectX HCA chips to each of the switch chips, allowing the system to route around failed links. Additionally, 12 pass-through gigabit Ethernet connections are provided to access the gigabit Ethernet interfaces on individual Sun Blade 6000 server modules mounted in the Sun Blade 6048 modular system. The same standard Compact Small Form-factor Pluggable (CSFP) connectors are used on the back panel for direct connection to the Sun DS 3456 or 3x24 switch, with each 12x connection providing three 4x InfiniBand connections.



Transparent and Open Chassis and System Management

Management in legacy blade platforms has typically either been lacking, or administrators have been forced into adopting unique and platform-specific management infrastructure. To address this issue, the Sun Blade 6000 and 6048 modular systems provide a wide range of flexible management options.

Chassis Monitoring Module (CMM)

The Chassis Monitoring Module (CMM) is the primary point of management for all shared chassis components and functions, providing a set of management interfaces. Each server module contains its own service processor, giving it remote management capabilities similar to those of other Sun servers. Through their respective Lights Out Management service processors, individual server modules provide IPMI, HTTPS, CLI (SSH), SNMP, and file transfer interfaces that are directly accessible from the Ethernet management port on the Chassis Monitoring Module (CMM). Each server module is assigned an IP address (either manually or via DHCP) that is used for the management network.
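Since each service processor exposes standard interfaces at its own management IP address, ordinary tools can sweep the whole chassis. The minimal Python sketch below polls each blade SP's SNMP agent with the net-snmp snmpget utility; the addresses and community string are placeholders, and sysDescr.0 (1.3.6.1.2.1.1.1.0) is the standard MIB-II system description object.

    #!/usr/bin/env python
    # Minimal sketch: poll each server module's service processor over SNMP
    # using the net-snmp snmpget utility. Addresses/community are placeholders.
    import subprocess

    BLADE_SPS = ["10.0.0.%d" % n for n in range(50, 60)]  # hypothetical SP addresses
    COMMUNITY = "public"                                  # placeholder community string
    SYS_DESCR = "1.3.6.1.2.1.1.1.0"                       # MIB-II sysDescr.0

    for host in BLADE_SPS:
        out = subprocess.check_output(
            ["snmpget", "-v2c", "-c", COMMUNITY, host, SYS_DESCR])
        print(host, out.decode().strip())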

CMM Network Functionality

A single CMM is built into each Sun Blade 6000 modular system and Sun Blade 6048 shelf, and is configured with an individual IP address assigned either statically or dynamically via DHCP. The CMM provides complete monitoring and management functionality for the chassis (or shelf) while providing access to server module management functions. In addition, the CMM supports HTTP and CLI "pass-through" interfaces that provide transparent access to each server module. The CMM also provides access to each server module via a single serial port through which any of the various LOM interfaces can be configured. The CMM's management functions include:

• Implementation of an IPMI satellite controller, making the chassis environmental sensors visible to the server modules' BMC functions
• Direct environmental and inventory management via CLI and IPMI interfaces
• CMM, ILOM, and NEM firmware management
• Pass-through management of blades using IPMI, SNMP, and HTTP links along with command line interface (CLI) SSH contexts

The management network internal to the CMM joins the local management processor on each server module to the external management network through the passive midplane.

CMM Architecture

A portion of the CMM functions as an unmanaged switch dedicated exclusively to remote management network traffic, letting administrators access the remote management functions of the server modules. The switch in the CMM provides a single network interface to each of the server modules and to each of the NEMs, as well as to the service processor located on the CMM itself. Figure 26 provides an illustration and a block-level diagram of the Sun Blade 6000 CMM. The Sun Blade 6048 CMM has a different form factor but provides the same functionality.

[Figure 26 illustration: the CMM's unmanaged switch connects the CMM service processor, server modules 0 through 9, NEM 0, and NEM 1, with two gigabit Ethernet uplinks to the external management network.]

Figure 26. The CMM provides a management network that connects to each server module, the two NEMs, and the CMM itself (Sun Blade 6000 CMM shown)

The CMM provides various management functions, including power control of the chassis as well as hot-plug operations for infrastructure components such as power supply modules, fan modules, server modules, and NEMs. The CMM acts as a conduit to server module LOM configuration, allowing settings such as network addresses and administrative users to be configured or viewed.

Sun xVM Ops Center

Beyond local and remote management capabilities, datacenter infrastructure needs to be agile and flexible, allowing not only fast deployment but streamlined redeployment of resources as required. Sun xVM Ops Center technology (formerly Sun N1 System Manager and Sun Connection) provides an IT infrastructure management platform for integrating and automating management of thousands of heterogeneous systems. To improve life-cycle and change management, Sun xVM Ops Center supports the management of applications and the servers on which they run, including the Sun Blade 6000 and 6048 modular systems.

Sun xVM Ops Center simplifies infrastructure life-cycle management by letting administrators perform standardized actions across logical groups of systems. Sun xVM Ops Center can automatically discover and group bare-metal systems, performing actions on the entire group as easily as operating on a single system. Sun xVM Ops Center remotely installs and updates firmware and operating systems, including support for:

• Solaris 8, 9, and 10 on SPARC systems
• Solaris 10 on x86/x64 platforms
• Red Hat and SuSE distributions



In addition, the software provides considerable lights-out monitoring of both hardware and software, including fans, temperature, disk and voltage levels — as well as swap space, CPU utilization, memory capacity, and file systems. Role-based access control lets IT staff grant specific management permissions to specific users. A convenient hybrid user interface integrates both a command-line interface (CLI) and an easy-to-use graphical user interface (GUI), providing remote access to manage systems from virtually anywhere.

Sun xVM Ops Center provides advanced management and monitoring features to the Sun Blade 6000 and 6048 modular systems. The remote management interface discovers and presents the Sun Blade server modules in the chassis as if they were individual servers. In this fashion, the server modules appear in exactly the same way as individual rackmount servers, making the same operations, detailed inventory, and status pages available to administrators. The server modules are discovered and organized into logical groups for easy identification of individual modules and of the system chassis and racks that contain them. Organizing servers into groups also allows features such as OS deployment across multiple server modules. At the same time, individual server modules can also be managed independently from the rest of the chassis. This flexibility allows for management of server modules that may have different requirements than the other modules deployed in the same chassis.

Some of the functions available through Sun xVM Ops Center software include operating system provisioning, firmware updates (for both the BIOS and ILOM service processor firmware), and health monitoring. In addition, Sun xVM Ops Center includes a framework that allows administrators to easily access inventory information and simplifies the task of running jobs on multiple servers with server grouping functionality.



Chapter 5

Conclusion

Sun's innovative technology and open-systems approach make modular systems attractive across a broad set of applications and activities — from deploying dynamic Web services infrastructure to building datacenters that run demanding HPC codes. The Sun Blade 6000 modular system provides the promised advantages of modular architecture while retaining essential flexibility for how technology is deployed and managed. The Sun Blade 6048 modular system extends and amplifies these strengths, allowing organizations to build ultra-dense infrastructure that can scale to provide the world's largest terascale and petascale supercomputing clusters and grids.

Sun's standard and open-systems based approach yields choice and avoids compromise — providing a platform that benefits from widespread industry innovation. With chassis designed for investment protection into the future, organizations can literally cable once, and change their deployment options as required — mixing and matching server modules as desired. A choice of Sun SPARC, Intel Xeon, or AMD Opteron based server modules and a choice of operating systems makes it easy to choose the right platform for essential applications. Industry-standard I/O provides leading flexibility and leading throughput for individual servers. Transparent networking and management means that the Sun Blade 6000 and 6048 modular systems fit easily into existing network and management infrastructure.

The Sun Blade 6000 and 6048 modular systems get blade architecture right. Together with the Sun Blade 8000 and 8000 P modular systems, Sun now has one of the most comprehensive modular system families in the industry. This breadth of coverage translates directly to savings in terms of administration and management. For example, unified support for the Solaris OS across all server modules means that the same features and functionality are available on all processor platforms. This approach saves both time in training and administration — even as the system delivers agile infrastructure for the organization's most critical applications.




Sun Blade 6000 and 6048 Modular Systems

On the Web sun.com/servers/blades/6000

Sun Microsystems, Inc. 4150 Network Circle, Santa Clara, CA 95054 USA Phone 1-650-960-1300 or 1-800-555-9SUN (9786) Web sun.com

© 2007-2008 Sun Microsystems, Inc. All rights reserved. Sun, Sun Microsystems, the Sun logo, CoolThreads, Java, JVM, Solaris, Sun Blade, Sun Fire, N1, and ZFS are trademarks or registered trademarks of Sun Microsystems, Inc. and its subsidiaries in the United States and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the US and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. Intel Xeon is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries. AMD Opteron is a trademark or registered trademark of Advanced Micro Devices. Information subject to change without notice. Printed in USA SunWIN #:494863 06/08
