
Cluster Storage Devices

6.4 MSCP I/O Load Balancing

6.4.6 Overriding MSCP I/O Load Balancing for Special Purposes

In some configurations, you may want to designate one or more systems in your cluster as the primary I/O servers and restrict I/O traffic on other systems. You can accomplish these goals by overriding the default load-capacity ratings used by the MSCP server. For example, if your cluster consists of two Alpha systems and one VAX 6000-400 system and you want to reduce the MSCP-served I/O traffic to the VAX, you can assign a low MSCP_LOAD value, such as 50, to the VAX. Because the two Alpha systems each start with a load-capacity rating of 340 and the VAX now starts with a load-capacity rating of 50, the MSCP-served satellites will direct most of the I/O traffic to the Alpha systems.
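The MSCP_LOAD system parameter both loads the MSCP server and, when set to a value greater than 1, supplies the load-capacity rating. The following is a minimal sketch of how the override in the example above might be made through MODPARAMS.DAT and AUTOGEN; the choice of AUTOGEN phases reflects typical practice and should be adapted to your site.

   ! Add to SYS$SYSTEM:MODPARAMS.DAT on the VAX node:
   MSCP_LOAD = 50        ! load the MSCP server with a reduced load-capacity rating

   ! Then regenerate system parameters and reboot:
   $ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK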

6.5 Managing Cluster Disks With the Mount Utility

For locally connected disks to be accessible to other nodes in the cluster, the MSCP server software must be loaded on the computer to which the disks are connected (see Section 6.3.1). Further, each disk must be mounted with the Mount utility, using the appropriate qualifier: /CLUSTER, /SYSTEM, or /GROUP. Mounting multiple disks can be automated with command procedures; a sample command procedure, MSCPMOUNT.COM, is provided in the SYS$EXAMPLES directory on your system.
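The following is not a copy of MSCPMOUNT.COM, but a minimal sketch of the kind of site-specific mount procedure it illustrates; the procedure name, device names, and volume labels are hypothetical.

   $! MOUNT_DISKS.COM (hypothetical) -- mount locally attached and served volumes
   $ SET NOON                                    ! do not abort if one mount fails
   $ MOUNT/SYSTEM/NOASSIST  $1$DUA100: USERDATA  ! systemwide mount on this node
   $ MOUNT/CLUSTER/NOASSIST $1$DUA200: TOOLS     ! mount on every active cluster member
   $ EXIT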

The Mount utility also provides other qualifiers that determine whether a disk is automatically rebuilt during a remount operation. Different rebuilding techniques are recommended for data and system disks. This section describes how to use the Mount utility for these purposes.
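As a brief sketch of the mechanics, the rebuild behavior is controlled with the /REBUILD and /NOREBUILD qualifiers of the MOUNT command, and a deferred rebuild can be performed later with SET VOLUME/REBUILD; the device name and label below are hypothetical.

   $ MOUNT/SYSTEM/NOREBUILD $1$DUA100: USERDATA  ! defer the rebuild at mount time
   $ SET VOLUME/REBUILD $1$DUA100:               ! rebuild later, at a convenient time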

6.5.1 Mounting Cluster Disks

To mount disks that are to be shared among all computers, specify the MOUNT command as shown in the following table.

IF...                                   THEN...

At system startup

The disk is attached to a single        Use MOUNT/CLUSTER device-name on the
system and is to be made available      computer to which the disk is to be
to all other nodes in the cluster.      mounted. The disk is mounted on every
                                        computer that is active in the cluster
                                        at the time the command executes.
                                        First, the disk is mounted locally.
                                        Then, if the mount operation succeeds,
                                        the disk is mounted on other nodes in
                                        the cluster.

The computer has no disks directly      Use MOUNT/SYSTEM device-name on the
attached to it.                         computer for each disk the computer
                                        needs to access. The disks can be
                                        attached to a single system or shared
                                        disks that are accessed by an HSx
                                        controller. Then, if the mount
                                        operation succeeds, the disk is
                                        mounted on the computer joining the
                                        cluster.

When the system is running

You want to add a disk.                 Use MOUNT/CLUSTER device-name on the
                                        computer to which the disk is to be
                                        mounted. The disk is mounted on every
                                        computer that is active in the cluster
                                        at the time the command executes.
                                        First, the disk is mounted locally.
                                        Then, if the mount operation succeeds,
                                        the disk is mounted on other nodes in
                                        the cluster.
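As a concrete illustration of the table, the following commands show both forms; the device name and volume label are hypothetical.

   $ MOUNT/CLUSTER $1$DUA100: USERDATA   ! mounts the volume clusterwide from the serving node
   $ MOUNT/SYSTEM  $1$DUA100: USERDATA   ! mounts the served volume on a node joining the cluster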

To ensure disks are mounted whenever possible, regardless of the sequence in which systems in the cluster boot (or shut down), startup command procedures should use MOUNT/CLUSTER and MOUNT/SYSTEM as described in the preceding table.
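Because a served disk may not yet be available at the moment an individual node runs its startup procedure, one common approach is to retry the mount until it succeeds. The following loop is a sketch only; the device name, volume label, and retry interval are hypothetical.

   $ SET NOON                                    ! a failed mount must not abort startup
   $ RETRY:
   $ MOUNT/CLUSTER/NOASSIST $1$DUA100: USERDATA
   $ IF $STATUS THEN EXIT                        ! success: low bit of $STATUS is set
   $ WAIT 00:00:30                               ! wait 30 seconds, then try again
   $ GOTO RETRY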
