
seconds to wait for those servers to come up. The decision to wait is controlled by the criteria managed by the nsdServerWaitTimeWindowOnMount option.

Valid values are between 0 and 1200 seconds. The default is 300. A value of zero indicates that no waiting is done. The interval for checking is 10 seconds. If nsdServerWaitTimeForMount is 0, nsdServerWaitTimeWindowOnMount has no effect.

The mount thread waits when the daemon delays for safe recovery. The mount wait for NSD servers to come up, which is covered by this option, occurs after the recovery wait expires and the mount thread is allowed to proceed.

nsdServerWaitTimeWindowOnMount
Specifies a window of time (in seconds) during which a mount can wait for NSD servers as described for the nsdServerWaitTimeForMount option. The window begins when quorum is established (at cluster startup or subsequently), or at the last known failure times of the NSD servers required to perform the mount.

Valid values are between 1 and 1200 seconds. The default is 600. If nsdServerWaitTimeForMount is 0, nsdServerWaitTimeWindowOnMount has no effect.

When a node rejoins the cluster after having been removed for any reason, the node resets all the failure time values that it knows about. Therefore, when a node rejoins the cluster it believes that the NSD servers have not failed. From the node's perspective, old failures are no longer relevant.

GPFS checks the cluster formation criteria first. If that check falls outside the window, GPFS then checks for NSD server fail times being within the window.
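For example, a mount could be allowed to wait up to two minutes for NSD servers within a five-minute window with an invocation such as the following (the values are illustrative, not recommendations):

   mmchconfig nsdServerWaitTimeForMount=120,nsdServerWaitTimeWindowOnMount=300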

pagepool
Changes the size of the cache on each node. The default value is 64M. The minimum allowed value is 4M. The maximum allowed value depends on the amount of available physical memory and your operating system. Specify this value with the character M, for example, 60M.
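For example, an illustrative command to increase the cache to 256 MB (whether the change takes effect immediately or requires a GPFS restart depends on your GPFS level) might be:

   mmchconfig pagepool=256M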

prefetchThreads
Controls the maximum possible number of threads dedicated to prefetching data for files that are read sequentially, or to handle sequential write-behind.

The actual degree of parallelism for prefetching is determined dynamically in the GPFS daemon. The minimum value is 2. The maximum value is 104. The default value is 72. The maximum value of prefetchThreads plus worker1Threads is:
- On 32-bit kernels, 164
- On 64-bit kernels, 550
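For example, an illustrative command raising the limit to its maximum (the value is a placeholder, not a tuning recommendation) might be:

   mmchconfig prefetchThreads=104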

subnets
Specifies subnets used to communicate between nodes in a GPFS cluster.

Enclose the subnets in quotes and separate them by spaces. The order in which they are specified determines the order in which GPFS uses these subnets to establish connections to the nodes within the cluster. For example, subnets="192.168.2.0" refers to IP addresses 192.168.2.0 through 192.168.2.255 inclusive.

An optional list of cluster names may also be specified, separated by commas. The names may contain wild cards similar to those accepted by shell commands. If specified, these names override the list of private IP addresses. For example, subnets="10.10.10.0/remote.cluster;192.168.2.0".

This feature cannot be used to establish fault tolerance or automatic failover. If the interface corresponding to an IP address in the list is down, GPFS does not use the next one on the list. For more information about subnets, see General Parallel File System: Advanced Administration Guide and search on Using remote access with public and private IP addresses.
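For example, an illustrative command using the subnet value from the description above might be:

   mmchconfig subnets="192.168.2.0"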

tiebreakerDisks
Controls whether GPFS will use the node quorum with tiebreaker algorithm in place of the regular node based quorum algorithm. See General Parallel File System: Concepts, Planning, and Installation Guide.
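For example, an illustrative command enabling tiebreaker disks (the NSD names are placeholders, and the exact syntax for listing them may vary by GPFS level) might be:

   mmchconfig tiebreakerDisks="nsd1;nsd2;nsd3"

Reverting to regular node-based quorum, assuming the no form is supported by your GPFS level, might look like:

   mmchconfig tiebreakerDisks=no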

