
supernode technical specifications - Bull


bullx supernode

Building on the success of its bullx blade system and on its acknowledged experience in designing SMP systems, Bull is now extending its bullx range of Extreme Computing nodes. With the bullx supernode, Bull is taking Extreme Computing further by introducing a new generation of SMP system that can satisfy the needs of even the most memory-hungry applications. Whether you need an efficient SMP node within your High-Performance Computing (HPC) cluster, or you are looking for versatile large nodes to simplify the architecture of a large-scale cluster, the bullx supernode will deliver the ultra-high performance your applications need. With the bullx supernode family, Bull offers you made-to-measure solutions based on industry standards, so you can go even further, demand more from your systems, innovate even faster, and turn the corner into the future at the head of the pack.

The bullx supernode family was designed by Bull's Extreme Computing experts with the following guidelines in mind:

• Optimization, compactness and simplification of the compute node for HPC usage
• Optimization and enhanced connectivity of the service nodes
• Versatility, to support a wide range of applications, including memory-hungry code
• Expandability: up to 4 x 4-socket SMP nodes can be interconnected through a Bull-designed switch, to form a large SMP node with up to 16 sockets and 4 TB of memory
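The expandability figures above can be sanity-checked with a little arithmetic. A minimal sketch, assuming each node carries 4 sockets and 32 DIMM slots populated with 32 GB DIMMs (the figures quoted elsewhere in this datasheet):

```python
# Scaling arithmetic behind the bullx supernode drawer-interconnect option,
# using figures quoted in this datasheet.
SOCKETS_PER_NODE = 4
MAX_NODES = 4                  # coupled through the Bull-designed switch
DIMM_SLOTS_PER_NODE = 32
DIMM_SIZE_GB = 32              # "ready for" 32 GB DIMMs, per the memory spec

sockets = SOCKETS_PER_NODE * MAX_NODES
memory_tb = MAX_NODES * DIMM_SLOTS_PER_NODE * DIMM_SIZE_GB / 1024

print(sockets)    # 16 sockets, as stated above
print(memory_tb)  # 4.0 TB
```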

Leading-edge technologies

The resulting bullx supernodes capitalize on the latest technological advances, such as:

• New-generation Intel® Xeon® processor E7-4800 family (codename: Westmere-EX), designed for high-end servers, with advanced reliability and exceptional scalability
• Intel QuickPath Interconnect protocol
• Optimized intra-node throughput and latency thanks to bullx MPI
• InfiniBand QDR network connection
• Support for GPU systems

Thanks to these innovations, the bullx supernode delivers:

• Improved performance (enhanced compute node efficiency, reduced latency, higher communication and I/O throughput)
• Lower cost of ownership (fewer components, lower management cost, reduced installation time, power efficiency)


• Enhanced reliability ('fat' nodes mean fewer nodes)

High-density compute nodes

The exclusive format of the bullx S6010 nodes – 1.5U L-shaped drawers that fit together in pairs to form a 3U drawer – delivers outstanding density for a high-end HPC node. They are the ideal foundation on which to build large-scale HPC clusters while minimizing the number of nodes, thus simplifying the infrastructure and operation of the whole system.
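As a rough illustration of that density, the following back-of-envelope sketch fills a rack with paired S6010 drawers. The 42U rack, full population, and 10-core processor choice are assumptions for the example, not Bull specifications:

```python
# Rough rack-density sketch for bullx S6010 compute nodes.
# Assumptions: a standard 42U rack fully populated with paired 1.5U
# drawers (2 nodes per 3U), 4 sockets per node, and 10-core E7-4800
# parts from the processor specification.
RACK_U = 42
NODES_PER_3U = 2

nodes = RACK_U // 3 * NODES_PER_3U      # 28 nodes
sockets = nodes * 4                     # 112 sockets
cores = sockets * 10                    # 1120 cores

print(nodes, sockets, cores)
```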

Service nodes: enhanced I/O and storage capabilities within 3U

The bullx S6030 nodes offer advanced connectivity features, an enhanced power supply, extended storage options and redundancy features, so they are ideally suited to act as management or I/O nodes. They can also be coupled to a GPU system.

Power-conscious design

• Processor power management features
• Sleep mode
• Ultra-capacitor protecting the node from power outages of up to 300 ms. In areas with good-quality power supplies, the ultra-capacitor means a UPS is no longer needed – resulting in savings of up to 15% on power consumption!
• High-efficiency power supply unit (90%+)
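The ride-through requirement behind the ultra-capacitor figure is easy to quantify. A rough sketch, assuming worst-case draw equal to the S6010's 1500 W PSU rating (actual draw is typically lower):

```python
# Back-of-envelope check on the ultra-capacitor ride-through figure.
# Assumption: the node draws its full 1500 W PSU rating during the
# outage, which is the worst case.
PSU_WATTS = 1500
RIDE_THROUGH_S = 0.300   # 300 ms, as stated above

energy_joules = PSU_WATTS * RIDE_THROUGH_S
print(energy_joules)     # 450.0 J buffered by the ultra-capacitor
```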

www.bull.com/extremecomputing



supernode technical specifications

Form factor
  bullx S6010 compute node: Rack-mount 1.5U L-shaped drawer; 2 drawers are fitted together to make up a 3U drawer
  bullx S6030 service node: Rack-mount 3U drawer

Processors (both models)
  Up to 4 hexa-/octo-/ten-core Intel® Xeon® E7-4800 family (Westmere-EX) at up to 2.4 GHz, or octo-core Intel® Xeon® E7-8800 family (Westmere-EX, optimized frequency) at 2.66 GHz (up to 30 MB shared cache), OR up to 4 quad-/hexa-/octo-core Intel® Xeon® 7500 series (Nehalem-EX) at up to 2.26 GHz (24 MB shared cache)

Architecture
  S6010: 1 x Intel® 7500 chipset (Boxboro IOH) with QPI up to 6.4 GT/s
  S6030: 2 x Intel® 7500 chipsets (Boxboro IOH) with QPI up to 6.4 GT/s

Memory (both models)
  32 slots for DDR3 SDRAM RDIMMs at 1066 MHz, DR or QR; up to 512 GB with 16 GB DIMMs; ready for 1024 GB with 32 GB DIMMs when available

Drawer interconnect switch (both models)
  Supported, to make up a large 8- to 16-socket SMP node (option, available later)

Expansion slots
  S6010: 1 PCI-Express 2.0 x16 LP slot
  S6030: 2 PCI-Express 2.0 x16 and 4 PCI-Express 2.0 x8, all LP slots

Compatible accelerators and graphics cards (both models)
  NVIDIA® Tesla S2050 / NextIO vCORE Express 2070 1U computing system

Storage
  S6010: Max. internal storage capacity 750 GB; 1 SATA2 7.2 krpm 2.5-inch disk (250/500/750 GB)
  S6030: Max. internal storage capacity 4800 GB with 8 x 600 GB SAS disks, or 8000 GB with 8 x 1000 GB SATA disks (with RAID card option). Up to 6 SATA2 7.2 krpm disks (160/500/1000 GB), OR up to 8 SAS disks (146/300/450/600 GB at 10 krpm, or 146 GB at 15 krpm) with PCI-Express SAS RAID controller board, OR up to 8 SATA NAND flash SSDs (128/256 GB) with PCI-Express SAS/SATA RAID controller board. All disks are 2.5-inch and hot-swappable. Support of RAID 0, 1, 5, 6, 10, 50, 60 (with PCI-Express SAS/SATA RAID board). CD/DVD reader/writer (option)

Interconnect
  S6010: Embedded single-port ConnectX-2 InfiniBand QDR controller (QSFP connector)
  S6030: PCI-E single- or dual-port ConnectX-2 InfiniBand QDR controller card (option)

Ethernet
  S6010: 2 embedded 1 Gb/s Ethernet interfaces (Intel Kawela dual-port controller); 1 embedded 10/100 Mb/s Ethernet interface for server management
  S6030: 4 embedded 1 Gb/s Ethernet interfaces (2 x Intel Kawela dual-port controllers); 1 embedded 10/100 Mb/s Ethernet interface for server management; PCI-E 10 Gb/s Ethernet controller (single/dual port) (option)

I/O ports
  S6010: 2 external USB ports; 2 x 1 Gb/s external Ethernet ports (RJ45); 1 x 10/100 Mb/s Ethernet port (RJ45) for server management; 1 external VGA port
  S6030: 1 internal USB port; 2 external USB ports; 4 x 1 Gb/s external Ethernet ports (RJ45); 1 x 10/100 Mb/s Ethernet port (RJ45) for server management; 1 external VGA port

Security (both models)
  Anti-intrusion switch and Trusted Platform Module chip

Management (both models)
  Integrated Baseboard Management Controller (BMC); Local Control Panel display on drawer front

Power supply
  S6010: 1 x 1500 W PSU; auto-sensing 110/220 V, 60/50 Hz
  S6030: 2 redundant, hot-swap 1600 W PSUs, efficiency up to 91%; auto-sensing 110/220 V, 60/50 Hz

Sleep state 4, standby to disk (both models)
  Supported

Ultracapacitor (both models)
  Ultracapacitor module to offset power outages of up to 300 ms (optional)

Ventilation
  S6010: 4 pairs of 17,500 rpm counter-rotating cooling fans
  S6030: 4 pairs of 17,500 rpm hot-swap counter-rotating cooling fans

Physical specifications
  S6010: 44/88 x 440 x 750 mm (H x W x D); 30 kg
  S6030: 130 x 440 x 750 mm (H x W x D); 45 kg

OS & cluster software (both models)
  Red Hat Enterprise Linux or SUSE SLES with the bullx supercomputer suite, OR Microsoft® Windows® Server 2008 R2 with Microsoft® Windows® HPC Server 2008 R2

Safety (both models)
  CE, UL/CSA, FCC, RoHS

Warranty
  1 year, with optional warranty extension
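The RAID levels listed for the S6030 trade capacity for redundancy in the usual way. The sketch below applies standard RAID capacity formulas (general RAID arithmetic, not Bull-specific controller behaviour) to the maximum 8 x 600 GB SAS configuration:

```python
# Usable-capacity sketch for the S6030 storage options above, using
# standard RAID capacity formulas. Assumption: 8 x 600 GB SAS disks,
# the maximum SAS configuration in this datasheet.
def usable_gb(disks: int, size_gb: int, level: int) -> int:
    """Usable capacity for some RAID levels the SAS/SATA board supports."""
    if level == 0:
        return disks * size_gb              # striping, no redundancy
    if level == 1:
        return disks // 2 * size_gb         # mirrored pairs
    if level == 5:
        return (disks - 1) * size_gb        # one parity disk's worth
    if level == 6:
        return (disks - 2) * size_gb        # two parity disks' worth
    if level == 10:
        return disks // 2 * size_gb         # striped mirrors
    raise ValueError("RAID level not modelled in this sketch")

for level in (0, 1, 5, 6, 10):
    print(level, usable_gb(8, 600, level))
```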

© Bull SAS – 2011 – Bull acknowledges the rights of proprietors of trademarks mentioned herein. Bull reserves the right to modify this document at any time without notice.

Some offers, or parts of offers, described in this document may not be available in your country. Please consult your local Bull correspondent for information regarding the offers which may be available in your country.

Bull – Rue Jean Jaurès – 78340 Les Clayes-sous-Bois – France
