Oracle Database Oracle Clusterware Installation Guide for HP-UX
Cause: Oracle Clusterware is either not installed, or the Oracle Clusterware services are not up and running.
Action: Install Oracle Clusterware, or review the status of your Oracle Clusterware. Consider restarting the nodes, as doing so may resolve the problem.
Node nodename is unreachable
Cause: Unavailable IP host.
Action: Attempt the following:
1. Run the shell command ifconfig -a. Compare the output of this command with the contents of the /etc/hosts file to ensure that the node IP is listed.
2. Run the shell command nslookup to see if the host is reachable.
3. As the oracle user, attempt to connect to the node with ssh or rsh. If you are prompted for a password, then user equivalence is not set up properly. Review the section "Configuring SSH or RCP on All Cluster Nodes" on page 2-20.
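The first of these checks can be exercised against a sample hosts file. This is a minimal sketch using placeholder node names and addresses; on a real cluster node you would inspect /etc/hosts itself and follow up with nslookup and ssh:

```shell
# Write a sample hosts file with placeholder entries (illustration only;
# on a cluster node you would examine /etc/hosts directly)
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1    localhost
192.0.2.11   node1
192.0.2.12   node2
EOF

# ifconfig -a shows the local interfaces; the node IP taken from that
# output should appear in the hosts file. grep exits non-zero if the
# entry is missing, so this doubles as a pass/fail check.
grep node2 /tmp/hosts.sample
```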
PROT-8: Failed to import data from specified file to the cluster registry
Cause: Insufficient space in an existing Oracle Cluster Registry device partition, which causes a migration failure while running rootupgrade. To confirm, look for the error "utopen:12:Not enough space in the backing store" in the log file $ORA_CRS_HOME/log/hostname/client/ocrconfig_pid.log.
Action: Identify a storage device that has 280 MB or more of available space. Locate the existing raw device name from /var/opt/oracle/srvConfig.loc, and copy the contents of this raw device to the new device using the dd command.
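The copy itself is a plain block-for-block dd. The sketch below simulates it with ordinary files, since the real raw device paths come from your own srvConfig.loc and are not known here:

```shell
# Stand-ins for the old and new OCR raw devices (illustration only;
# substitute the device paths recorded in /var/opt/oracle/srvConfig.loc)
dd if=/dev/zero of=/tmp/old_ocr bs=1k count=4    # fake "existing" device
dd if=/tmp/old_ocr of=/tmp/new_ocr bs=1k         # block-for-block copy
```

Against the real devices the same form applies (dd if=old_raw_device of=new_raw_device), run as a user with read access to the old device and write access to the new one.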
Time stamp is in the future
Cause: One or more nodes have a different clock time than the local node. If this is the case, then you may see output similar to the following:
time stamp 2005-04-04 14:49:49 is 106 s in the future
Action: Ensure that all member nodes of the cluster have the same clock time.
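The reported skew is simply the difference between the two clocks in seconds. A sketch with made-up epoch timestamps follows; on a live cluster the remote value would come from something like ssh nodename date -u +%s:

```shell
# Two epoch timestamps 106 seconds apart (made-up values for illustration;
# on a real cluster LOCAL comes from the local clock and REMOTE from the
# remote node's clock)
LOCAL=1112626183
REMOTE=1112626289

SKEW=$((REMOTE - LOCAL))
echo "skew: ${SKEW} s"    # prints "skew: 106 s", matching the example above
```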
A.3 Performing Cluster Diagnostics During Oracle Clusterware Installations
If Oracle Universal Installer (OUI) does not display the Node Selection page, then perform clusterware diagnostics by running the olsnodes -v command from the binary directory in your Oracle Clusterware home (CRS_home/bin on Linux and UNIX-based systems, and CRS_home\BIN on Windows-based systems) and analyzing its output. Refer to your clusterware documentation if the detailed output indicates that your clusterware is not running.
In addition, use the following command syntax to check the integrity of the Cluster Manager:
cluvfy comp clumgr -n node_list -verbose
In the preceding syntax example, the variable node_list is the list of nodes in your cluster, separated by commas.
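For instance, with the hypothetical node names node1 and node2, the node list is just the names joined by commas. A small sketch of assembling the invocation:

```shell
# Hypothetical node names; the -n argument takes them comma-separated
NODE_LIST="node1,node2"

# On a cluster node you would then run:
#   cluvfy comp clumgr -n node1,node2 -verbose
echo "cluvfy comp clumgr -n ${NODE_LIST} -verbose"
```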
Troubleshooting the Oracle Clusterware Installation Process A-3