
Rocks+ Mellanox OFED Users Guide

5.4 Edition

Published May 04 2011

Copyright © 2011 Clustercorp, Inc.


Table of Contents

Preface
1. Overview
2. Attributes
3. Installing Rocks+ Mellanox OFED
   3.1. Installation on a New System
   3.2. Adding to an Existing Frontend
4. Configuring InfiniBand
   4.1. Setting up InfiniBand IPoIB
   4.2. Setting up the InfiniBand Subnet Manager
5. Using Rocks+ Mellanox OFED
   5.1. Examining the state of your fabric
   5.2. Debugging
   5.3. Using MPI-Selector
   5.4. Running IMB (Intel MPI Benchmark) to validate your fabric
   5.5. Firmware Flash
6. Care and Feeding
   6.1. Updating your kernel
   6.2. Configuring new appliances with OFED
A. Copyright
   A.1. Rocks+
   A.2. Rocks®

List of Tables

1-1. Summary
1-2. Roll Compatibility
1-3. Platform Compatibility
2-1. Roll Attributes


Preface

The Rocks+ Mellanox OFED roll installs and configures the driver stack and MPI layer as provided in the
MLNX_OFED_LINUX-1.5.2-2.1.0-rhel5.5.iso image (md5sum e7596cb6ab56a043c15d27f469307013) from Mellanox
Technologies. Support is provided for InfiniBand HCAs from Mellanox Technologies.

Please visit the Mellanox Technologies site [1] to learn more about the OFED release and its components.

Notes

1. http://mellanox.com


Chapter 1. Overview

Table 1-1. Summary

Name              Rocks+ Mellanox OFED (rocks+mlnx-ofed)
Version           5.4
Maintained By     Clustercorp
Architecture      x86_64
Compatible with   Rocks+ 5.4

The Rocks+ Mellanox OFED roll has the following requirements of other rolls. Compatibility with all known rolls
is assured, and all known conflicts are listed. There is no assurance of compatibility with third-party rolls.

Table 1-2. Roll Compatibility

Requires    base, ganglia, hpc, web-server, rocks+core, rocks+kernel
Conflicts   xen

Table 1-3. Platform Compatibility

Physical Hardware      yes
Rocks+ Private Cloud   no
Amazon EC2             no


Chapter 2. Attributes

Table 2-1. Roll Attributes

Name   Type   Default
ofed   bool   TRUE (a)

Notes:

a. The default value is created using "rocks add appliance attr {compute, login} <name> <value>" and only affects
the compute and login appliances.

ofed

If TRUE then install the OFED driver stack and MPI environments.
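For example, to keep a particular appliance from installing the stack, the attribute can be overridden. This is only a sketch; it assumes the rocks set appliance attr verb is available in your Rocks release:

# rocks set appliance attr compute ofed false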


Chapter 3. Installing Rocks+ Mellanox OFED

This roll can be installed during the frontend installation step of your cluster, or you can add the roll to a running
system. The Rocks+ Core roll contains the foundation for all of Rocks+ and must always be installed, and
licensed, to use any of the other Rocks+ rolls.

3.1. Installation on a New System

The Rocks+ Mellanox OFED roll is added to a frontend installation in exactly the same manner as other rolls.
Refer also to Section 2.2 of the Rocks® Users Guide for more information.

Upon selecting Submit, the Rocks+ Mellanox OFED roll will be added to the selected rolls on the left. You can then
select other rolls or start the installation as necessary.

3.2. Adding to an Existing Frontend

The Rocks+ Mellanox OFED roll can be added onto an already installed frontend. The following procedure, run
as root, will allow you to add the roll.

# rocks add roll rocks+mlnx-ofed-5.4-*.x86_64.disk1.iso
# rocks enable roll rocks+mlnx-ofed
# cd /export/rocks/install
# rocks create distro
# rocks run roll rocks+mlnx-ofed | sh

Reboot your frontend to complete the configuration:

# reboot

Lastly, reinstall your compute nodes:

# rocks run host compute /boot/kickstart/cluster-kickstart
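Once the compute nodes have reinstalled, a quick sanity check is to query the installed OFED version on every node. This is only a sketch; ofed_info ships with the Mellanox OFED stack, and the head -1 trim is merely illustrative:

# rocks run host compute "ofed_info | head -1"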



Chapter 4. Configuring InfiniBand

4.1. Setting up InfiniBand IPoIB

Follow these instructions to set up IP over InfiniBand (IPoIB). IPoIB is optional, as none of the MPI environments
provided by the OFED Roll depend on IPoIB.

Please note that these steps need to be performed before the compute nodes are added to the cluster if you
want the IP addresses to be assigned automatically. It is always possible to assign IP addresses manually at any
time, as sketched below.
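For example, a manual assignment for a node that has already been discovered might look like this (a sketch; the host name and address are placeholders, and the ipoib network is assumed to have been added as shown in the next section):

# rocks add host interface compute-0-0 ib0 ip=172.30.1.20 subnet=ipoib
# rocks sync host network compute-0-0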

4.1.1. Setting up InfiniBand on Compute nodes

Perform the following steps to have IP addresses allocated to your InfiniBand interfaces (ib0) at compute node
discovery time. You will need a functioning InfiniBand fabric with an active subnet manager, as the compute
nodes will attempt to bring up the ib0 interface on first boot.

rocks add network ipoib <subnet> <netmask>
rocks set network mtu ipoib 65520

For example:

# rocks add network ipoib 172.30.0.0 255.255.0.0
# rocks set network mtu ipoib 65520

The IPoIB device defaults to ib0. If you wish to change this, perform the following step to set the device.

rocks add appliance attr compute ipoibdevice <device>

For example:

# rocks add appliance attr compute ipoibdevice ib1

4.1.2. Setting up InfiniBand on Headnode

If you have an InfiniBand HCA installed in your headnode, you can set up IPoIB on your headnode by running the
following command (please choose an IP address within the ipoib network added above, preferably at the lower
end of the address range):

# rocks add host interface localhost ib0 ip=172.30.1.10 subnet=ipoib name=`hostname -s`-ib

Bring up the ib0 interface on your headnode:

# rocks sync host network localhost
# ifup ib0

4.1.3. Discovering Compute nodes

At this point you can add your compute nodes as normal:

# insert-ethers

Choose the compute node appliance type and turn on your compute nodes in the order you want them assigned in Rocks.

4.2. Setting up the InfiniBand Subnet Manager

An InfiniBand network requires a Subnet Manager to be running either in the InfiniBand switch itself (switch
based) or on one of the nodes connected to the InfiniBand fabric (host based). If the output of ibstat shows
the port state as "Initializing", then you most likely need to run a host based subnet manager.

At this time the subnetmanager attribute should only be set to true for one host, i.e. the headnode or one of your
compute nodes.

In this example, the Rocks attribute subnetmanager is set to true for node compute-0-0. The next time compute-0-0
is installed, the opensmd service (Subnet Manager) will be turned on.

# rocks add host attr compute-0-0 subnetmanager true
# ssh compute-0-0 /boot/kickstart/cluster-kickstart

In this example, the Rocks attribute subnetmanager is set to true for the head node. You will need to reboot the head
node for the setting to take effect.

# rocks add host attr localhost subnetmanager true
# reboot
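After the reinstall or reboot, you can verify that the subnet manager took effect. A hedged check (opensmd is the service named above; the service invocation is standard RHEL 5, and active ports should now report State: Active):

# ssh compute-0-0 service opensmd status
# ibstat | grep "State:"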



Chapter 5. Using Rocks+ Mellanox OFED

The Rocks+ Mellanox OFED roll installs InfiniBand kernel drivers, libraries, and MPI environments (MVAPICH,
Open MPI) during the provisioning process. A series of debug files is provided containing output from the
installation. These can be a helpful tool in diagnosing HCA issues.

5.1. Examining the state of your fabric

The mlnx-ofed Roll contains a number of tools that can be used to debug issues with the HCAs and InfiniBand fabric.

The ibstat tool queries the basic status of InfiniBand device(s). If any of the ports show State: Active, the HCA
can pass MPI traffic (see Running Intel MPI Benchmark). If ports show State: Initializing, see Setting up the
InfiniBand Subnet Manager.

If you are using iWARP 10 Gigabit Ethernet adapters, you may ignore the output of ibstat. As long as the iWARP
interface (typically eth2) is up and has an IP address, it is ready to pass MPI traffic (see Running Intel MPI
Benchmark).

# ibstat
CA 'mlx4_0'
        CA type: MT26428
        Number of ports: 2
        Firmware version: 2.8.600
        Hardware version: a0
        Node GUID: 0x00237dffff93ef78
        System image GUID: 0x00237dffff93ef7b
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 4
                LMC: 0
                SM lid: 1
                Capability mask: 0x02510868
                Port GUID: 0x00237dffff93ef79
        Port 2:
                State: Down
                Physical state: Polling
                Rate: 10
                Base lid: 0
                LMC: 0
                SM lid: 0
                Capability mask: 0x02510868
                Port GUID: 0x00237dffff93ef7a
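To spot-check every compute node at once, the same query can be fanned out with rocks run host; the grep filter here is merely illustrative:

# rocks run host compute "ibstat | grep 'State:'"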

The mlnx-ofed roll provides an InfiniBand HCA Self Test Utility, hca_self_test.ofed, which reports firmware
version, host driver version/state, link status, and other useful information.

# hca_self_test.ofed

---- Performing InfiniBand HCA Self Test ----
Number of HCAs Detected ................ 1
PCI Device Check ....................... PASS
Kernel Arch ............................ x86_64
Host Driver Version .................... OFED-internal-1.5.2-20101219-1546: 1.5.2-2.6.18_194.el5
Host Driver RPM Check .................. PASS
HCA Firmware on HCA #0 ................. v2.8.600
HCA Firmware Check for HCA #0 .......... PASS
Host Driver Initialization ............. PASS
Number of HCA Ports Active ............. 1
Port State of Port #0 on HCA #0 ........ UP 4X QDR
Port State of Port #1 on HCA #0 ........ DOWN
Error Counter Check on HCA #0 .......... PASS
Kernel Syslog Check .................... PASS
Node GUID on HCA #0 .................... 00:23:7d:ff:ff:93:6e:32
------------------ DONE ---------------------

5.2. Debugging

The debug files are located at /root/ofed-debug.stage*.out:

ofed-debug.stage1.out: debug output of any packages which were removed prior to OFED install.
ofed-debug.stage2.out: debug output of OFED package install and driver reload.
ofed-debug.stage3.out: debug output of kernel driver rebuild (if required).
ofed-debug.stage4mft.out: debug output of automatic firmware update.
ofed-debug.stage5.out: debug output of subnet manager (if applicable).
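To scan all stages for problems in one pass, a simple illustrative search is:

# grep -i -e error -e fail /root/ofed-debug.stage*.out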

5.3. Using MPI-Selector

The mlnx-ofed Roll allows users to easily switch between the installed MPI implementations. Here are a few simple
commands that users can execute.

Display the current default MPI version:

$ mpi-selector --query
openmpi_gcc-1.4.2
level:user

Display available versions:

$ mpi-selector --list
mvapich_gcc-1.2.0
mvapich_intel-1.2.0
mvapich_pgi-1.2.0
openmpi_gcc-1.4.2
openmpi_intel-1.4.2
openmpi_pgi-1.4.2

Switch to another version:

$ mpi-selector --set openmpi_intel-1.4.2
Defaults already exist; overwrite them (y/N) y
$ mpi-selector --query
default:openmpi_intel-1.4.2
level:user

A menu driven version of mpi-selector is also available:

$ mpi-selector-menu
Current system default: none
Current user default:   none
"u" and "s" modifiers can be added to numeric and "U"
commands to specify "user" or "system-wide".
1. mvapich_gcc-1.2.0
2. mvapich_intel-1.2.0
3. mvapich_pgi-1.2.0
4. openmpi_gcc-1.4.2
5. openmpi_intel-1.4.2
6. openmpi_pgi-1.4.2
U. Unset default
Q. Quit
Selection (1-9[us], U[us], Q):

More information about using the mpi-selector tool can be found in the man pages (man mpi-selector). Detailed
information on adding and removing your own MPI implementations is also referenced in the man page.
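Since user-level defaults only affect new login shells, a quick sanity check after logging back in is to confirm which compiler wrapper is first on the PATH. The bin path shown is an assumption based on the install prefix used elsewhere in this guide:

$ which mpicc
/usr/mpi/gcc/openmpi-1.4.2/bin/mpicc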

5.4. Running IMB (Intel MPI Benchmark) to validate your fabric

IMB is a good all-around test to assess InfiniBand connectivity and performance. It is pre-built for each of the
bundled MPI environments in the mlnx-ofed roll (mvapich, mvapich2, openmpi).

Here is an example IMB run (fill in the names of your compute nodes when creating the hosts file). Note that only the
PingPong test is run. Remove PingPong from the command line to run the full suite of tests.

# useradd imb
# rocks sync users
# su - imb
$ mpi-selector --set=openmpi_gcc-1.4.2
$ logout
# su - imb
$ mpi-selector --query
default:openmpi_gcc-1.4.2
level:user
$ cat > hosts << EOF
> compute-0-0
> compute-0-1
> EOF
$ mpirun -np `wc -l < hosts` -machinefile hosts /usr/mpi/gcc/openmpi-1.4.2/tests/IMB-3.2/IMB-MPI1 PingPong

#---------------------------------------------------
# Intel (R) MPI Benchmark Suite V3.2, MPI-1 part
#---------------------------------------------------
# Date                  : Fri Feb 19 22:49:01 2010
# Machine               : x86_64
# System                : Linux
# Release               : 2.6.18-164.el5
# Version               : #1 SMP Tue Aug 18 15:51:48 EDT 2009
# MPI Version           : 2.1
# MPI Thread Environment: MPI_THREAD_SINGLE

# New default behavior from Version 3.2 on:
# the number of iterations per message size is cut down
# dynamically when a certain run time (per message size sample)
# is expected to be exceeded. Time limit is defined by variable
# "SECS_PER_SAMPLE" (=> IMB_settings.h)
# or through the flag => -time

# Calling sequence was:
# /usr/mpi/gcc/openmpi-1.4.2/tests/IMB-3.2/IMB-MPI1 PingPong

# Minimum message length in bytes: 0
# Maximum message length in bytes: 4194304
#
# MPI_Datatype                 : MPI_BYTE
# MPI_Datatype for reductions  : MPI_FLOAT
# MPI_Op                       : MPI_SUM
#
#

# List of Benchmarks to run:
# PingPong

#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
       #bytes #repetitions      t[usec]   Mbytes/sec
            0         1000         1.58         0.00
            1         1000         1.72         0.56
            2         1000         1.73         1.10
            4         1000         1.66         2.30
            8         1000         1.69         4.53
           16         1000         1.76         8.67
           32         1000         1.90        16.03
           64         1000         2.05        29.77
          128         1000         3.52        34.68
          256         1000         3.87        63.15
          512         1000         4.12       118.46
         1024         1000         4.83       202.38
         2048         1000         6.11       319.84
         4096         1000         7.48       522.30
         8192         1000        10.00       781.18
        16384         1000        13.13      1189.80
        32768         1000        19.32      1617.74
        65536          640        29.64      2108.64
       131072          320        55.38      2257.02
       262144          160        99.97      2500.85
       524288           80       187.50      2666.67
      1048576           40       332.99      3003.12
      2097152           20       665.78      3004.00
      4194304           10      1299.94      3077.06

# All processes entering MPI_Finalize

5.5. Firmware Flash

The mlnx-ofed roll installs InfiniBand drivers, MVAPICH, and Open MPI, and includes an automatic firmware flashing
routine that runs during the node kickstart process. A debug file is provided containing output from the
auto-installation and update process. This can be a helpful tool in diagnosing HCA issues. The file is located at
/root/ofed-debug.stage4mft.out.

The auto firmware flashing routine prints out information at the end of the debug file. Here is an example of a
successful firmware update:

version 0.92
210 ini files registered
probing devices...
###
##
#
# hca: mt26428_pci_cr0
#
##
###
standard firmware for PSID[MT_0D90110009] is None
cx2 firmware for PSID[MT_0D90110009] is 2.8.600
probed fw version 2.8.600
probed hardware version 0xb0
hwversion is 0xb0 and ConnectX2 firmware exists
firmware up-to-date

If the process fails to update in failsafe mode, useful information may be provided. Here is an example of an error
message related to an invariant sector found on an HCA that will not allow a failsafe update:

probing devices
discovered dev: mt25204_pci_cr0
standard firmware for PSID[MT_03B0120002] is 1.2.0
probed fw version 1.2.917
need to update firmware
mlxburn -dev /dev/mst/mt25204_pci_cr0 -fw fw-25204-rel-1_2_000/fw-25204-rel.mlx -conf fw-25204-rel-1
-I- Generating image ...
    Current FW version on flash: 1.2.917
    New FW version:              1.2.0

Note: The new FW version is not newer than the current FW version on flash.

Do you want to continue (y/n) [n] : y
Read and verify Invariant Sector
Invariant sector mismatch. Address 0x40
- DIFF DETECTED
in image: 0x15000720, while on flash: 0x14000720

The invariant sector can not be burnt in a failsafe manner.
You can perform the FW update without burning the invariant sector by
specifying the -skip_is flag.
See FW release notes for details on invariant sector updates.
*** ERROR *** Failsafe burn error: Invariant sector mismatch
-E- Image burn failed: child process exited abnormally

The firmware update failed due to an Invariant Sector error.
------------------------------------------------------------
This could be due to Cisco firmware loaded on the HCA or perhaps old Mellanox firmware.
If you would like to update the firmware, you may use the following commands.
These commands will update the firmware in NON FAILSAFE mode.
If the system loses power during the firmware update it could render the HCA inoperable.
Proceed with the firmware update at your own risk.

cd /opt/mlnx-ofed/firmware
mlxburn -dev /dev/mst/mt25204_pci_cr0 -fw fw-25204-rel-1_2_000/fw-25204-rel.mlx -conf fw-25204-rel-1

In the case above, you have the option of updating the firmware using the provided commands. The instructions
in the output above are only guidelines for updating firmware images on HCAs. Please consult your
hardware supplier's support team for more information. Additional information about firmware flashing can be found
on the Mellanox Technologies Firmware Support and Downloads site [1].

Notes

1. http://www.mellanox.com/support/firmware_download.php


Chapter 6. Care and Feeding

6.1. Updating your kernel

Be careful to make sure the new kernel is compatible with your OFED version.

6.1.1. Updating the kernel on your compute nodes

To update the kernel on your compute nodes, simply copy the kernel RPMs to
/export/rocks/install/contrib/5.4/x86_64/RPMS, rebuild your distribution, and reinstall
your compute nodes.

For example:

# ls -1
kernel-2.6.18-194.32.1.el5.src.rpm
kernel-2.6.18-194.32.1.el5.x86_64.rpm
kernel-devel-2.6.18-194.32.1.el5.x86_64.rpm
kernel-doc-2.6.18-194.32.1.el5.noarch.rpm
kernel-headers-2.6.18-194.32.1.el5.x86_64.rpm
# cp -p kernel*.rpm /export/rocks/install/contrib/5.4/x86_64/RPMS
# cd /export/rocks/install
# rocks create distro
# tentakel /boot/kickstart/cluster-kickstart
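tentakel is only one way to fan out the reinstall; the rocks run host form used in Chapter 3 works equally well:

# rocks run host compute /boot/kickstart/cluster-kickstart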

6.1.2. Updating the kernel on your head node

You may perform a yum update to install a new kernel on your headnode. Take care not to update any packages
which are provided by the mlnx-ofed roll (the list may be found in /opt/mlnx-ofed/RPMS/x86_64).

When the head node is rebooted, the OFED kernel drivers will be built for the new kernel and be ready for use. See
/root/ofed-debug.stage3.out after your head node has rebooted for the output of the OFED kernel driver rebuild.

If you want to update the kernel on the headnode to the same version as the compute nodes from the previous example,
perform the following steps:

# cd /export/rocks/install/contrib/5.4/x86_64/RPMS
# rpm -Uvh kernel*.rpm
# reboot


6.2. Configuring new appliances with OFED

The Rocks+ standard appliance configuration when using the mlnx-ofed roll enables the ofed attribute for the
compute appliance (see below). When this attribute is set, the compute nodes are provisioned with the OFED stack to
provide InfiniBand support.

# rocks list appliance attr
APPLIANCE ATTR    VALUE
frontend: managed false
compute:  managed true
compute:  ofed    true
nas:      managed true
network:  managed false
power:    managed false
ipmi:     managed false

The ofed attribute can be enabled for any appliance you wish. For example, if you wish to provision your nas
appliances with OFED, add the ofed attribute for the nas appliance. The same technique can be used if you
define your own compute appliances.

# rocks add appliance attr nas ofed true
# rocks list appliance attr
APPLIANCE ATTR    VALUE
frontend: managed false
compute:  managed true
compute:  ofed    true
nas:      managed true
nas:      ofed    true
network:  managed false
power:    managed false
ipmi:     managed false

At this point you can simply reinstall the nodes which are of the appropriate appliance type.
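For example, to reinstall the nas nodes (a sketch; this assumes the nas appliance name resolves as a host selector the same way compute does):

# rocks run host nas /boot/kickstart/cluster-kickstart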



Appendix A. Copyright

A.1. Rocks+

Copyright © 2011 Clustercorp Inc. All rights reserved.

This product includes software developed by Clustercorp Inc.; these portions may not be modified, copied, or
redistributed without the express written consent of Clustercorp Inc.

A.2. Rocks®

This product includes software developed by the Rocks® Cluster Group at the San Diego Supercomputer Center at
the University of California, San Diego and its contributors. This software is subject to the following additional
copyright:

Rocks(r)
www.rocksclusters.org
version 5.4 (Maverick)

Copyright (c) 2000 - 2010 The Regents of the University of California.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright
notice unmodified and in its entirety, this list of conditions and the
following disclaimer in the documentation and/or other materials provided
with the distribution.

3. All advertising and press materials, printed or electronic, mentioning
features or use of this software must display the following acknowledgement:
"This product includes software developed by the Rocks(r)
Cluster Group at the San Diego Supercomputer Center at the
University of California, San Diego and its contributors."

4. Except as permitted for the purposes of acknowledgment in paragraph 3,
neither the name or logo of this software nor the names of its
authors may be used to endorse or promote products derived from this
software without specific prior written permission. The name of the
software includes the following terms, and any derivatives thereof:
"Rocks", "Rocks Clusters", and "Avalanche Installer". For licensing of
the associated name, interested parties should contact Technology
Transfer & Intellectual Property Services, University of California,
San Diego, 9500 Gilman Drive, Mail Code 0910, La Jolla, CA 92093-0910,
Ph: (858) 534-5815, FAX: (858) 534-7345, E-MAIL: invent@ucsd.edu

THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
