Firewall Concepts and Configuration - HP Operations Manager
HP OpenView Operations for UNIX
Firewall Concepts and Configuration Guide
Edition 6
Manufacturing Part Number: none (PDF only)
Version A.08.xx
August 2006
© Copyright 2002-2006 Hewlett-Packard Development Company, L.P.
Legal Notices

Warranty. Hewlett-Packard makes no warranty of any kind with regard to this document, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be held liable for errors contained herein or direct, indirect, special, incidental or consequential damages in connection with the furnishing, performance, or use of this material.

A copy of the specific warranty terms applicable to your Hewlett-Packard product can be obtained from your local Sales and Service Office.

Restricted Rights Legend. Use, duplication or disclosure by the U.S. Government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause in DFARS 252.227-7013.

Hewlett-Packard Company
United States of America

Rights for non-DOD U.S. Government Departments and Agencies are as set forth in FAR 52.227-19(c)(1,2).

Copyright Notices. © Copyright 2002-2006 Hewlett-Packard Development Company, L.P.

No part of this document may be copied, reproduced, or translated to another language without the prior written consent of Hewlett-Packard Company. The information contained in this material is subject to change without notice.
Trademark Notices.

Adobe® is a trademark of Adobe Systems Incorporated.

HP-UX Release 10.20 and later and HP-UX Release 11.00 and later (in both 32- and 64-bit configurations) on all HP 9000 computers are Open Group UNIX 95 branded products.

Intel386, Intel80386, Intel486, and Intel80486 are U.S. trademarks of Intel Corporation.

Intel Itanium Logo: Intel, Intel Inside and Itanium are trademarks or registered trademarks of Intel Corporation in the U.S. and other countries and are used under license.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries.

Microsoft® is a U.S. registered trademark of Microsoft Corporation.

MS-DOS® is a U.S. registered trademark of Microsoft Corporation.

Netscape and Netscape Navigator are U.S. trademarks of Netscape Communications Corporation.

OpenView® is a registered U.S. trademark of Hewlett-Packard Company.

Oracle® is a registered U.S. trademark of Oracle Corporation, Redwood City, California.

OSF, OSF/1, OSF/Motif, Motif, and Open Software Foundation are trademarks of the Open Software Foundation in the U.S. and other countries.

Pentium® is a U.S. registered trademark of Intel Corporation.

SQL*Plus® is a registered U.S. trademark of Oracle Corporation, Redwood City, California.

UNIX® is a registered trademark of the Open Group.

Windows® and MS Windows® are U.S. registered trademarks of Microsoft Corporation.

All other product names are the property of their respective trademark or service mark holders and are hereby acknowledged.
Contents
1. Firewall Configuration in OVO
  About this Chapter
    Naming Conventions
  OVO Communication Concepts
    HTTPS Agent and Management Server Communication
    DCE Agent and Management Server Communication
    OVO Heartbeat Monitoring
      Normal Heartbeat Monitoring
      RPC Only Heartbeat Polling (for Firewalls)
      Agent Sends Live Packets
    Communication Types
      HTTPS/TCP Communication
      DCE/UDP Communication
      DCE/TCP Communication
      Microsoft RPC Communication
      Sun RPC Communication
      NCS Communication
  Configuring OVO for Firewall Environments
  Special Configurations
    Motif User Interface
    OVO Java User Interface
      Configuring Ports for the Java GUI or Secure Java GUI
    Message Forwarding
      Communication Concepts in Message Forwarding
      Configuring Message Forwarding in Firewall Environments
    VP390/VP400 in Firewall Environments
  Network Address Translation
    Address Translation of Duplicate Identical IP Ranges
    Known Issues in NAT Environments
      FTP Does Not Work
2. Advanced Configuration
  Special Configurations
    ICMP (DCE Agents Only)
    DNS
    SNMP Queries
  OVO Agent Installation in Firewall Environments
3. Configuring HTTPS Nodes
  Specifying Client Port Ranges
  Management Server and Managed Node Port Settings
  Configuring a Firewall for HTTPS Nodes without a Proxy
  Configuring a Firewall for HTTPS Nodes with Proxies
  Configuring the OVO Management Server
  Configuring OVO Managed Nodes
  Systems with Multiple IP Addresses
  HTTPS Agents and Network Address Translation
    Address Translation of Outside Addresses
    Address Translation of Inside Addresses
    Address Translation of Inside and Outside Addresses
    IP Masquerading or Port Address Translation
4. Configuring DCE Nodes
  Management Server and Managed Node Port Settings
  Configuring a Firewall for DCE Nodes
  Configuring the OVO Management Server
  Configuring OVO Managed Nodes
  Checking Communication Settings
    Verifying Communication Settings of the Management Server
    Verifying Communication Settings of Managed Nodes
    Checking the Endpoint Map
  Windows Managed Nodes
    Communicating with a Windows Managed Node Outside the Firewall
  Communication Types
    DCE/UDP Communication Type
    NCS Communication Type
    Sun RPC Communication Type
  MC/ServiceGuard in Firewall Environments
  Configuration Distribution
    Distributing the Configuration in an RPC Call
  Embedded Performance Component
    Configuring Ports for the Embedded Performance Component
    Configuring the Embedded Performance Component
    Configuring Reporter and/or Performance Manager
    Configuring Reporter/Performance Manager With HTTP Proxy
    Configuring Reporter/Performance Manager Without HTTP Proxy
    Changing the Default Port of the Local Location Broker
    Systems with Multiple IP Addresses
    Systems Installed with OpenView Operations for Windows
  Checkpoint Firewall-1 4.1 Integration
    Content Filtering
    Content Filtering for OVO
    Combining Content Filtering and Port Restrictions
  DCE Agents and Network Address Translation
    Address Translation of Outside Addresses
    Address Translation of Inside Addresses
      Configuring the Agent for the NAT Environment
    Address Translation of Inside and Outside Addresses
      Configuring the Agent for the NAT Environment
      Setting Up the Responsible Managers File
    IP Masquerading or Port Address Translation
5. DCE RPC Communication without Using Endpoint Mappers
  About this Chapter
  Concepts of Current OVO Communication
  DCE RPC Communication Concepts without Using Endpoint Mappers
    Objectives for DCE Communication without Using Endpoint Mappers for OVO
    Port Requirements for Remote Deployment
    Port Requirements for Manual Template and Instrumentation Deployment
    Communication Concepts
    Support Restrictions
  OVO Components Affected
  OVO Components Not Affected
  Configuration
    Setting of Variables for Processes
    Configuring Managed Nodes
      RPC Clients
        Example of a Port Configuration File
      RPC Server
        Example opcinfo or nodeinfo File Configuration
    Configuring Management Servers
      RPC Clients
        Command Examples for Setting Ports on an OVO Management Server
      RPC Servers
        Example Configuration
    Server Port Specification File
      File Syntax
      Example of an opcsvinfo File
      File Modification Test
  Internal Process Handling
  Variable Reference
  Examples
  Troubleshooting
    Diagnostics
    Tracing
    Testing
A. Generic OVO Variables and Troubleshooting
  Port Usage
    General Notes on Port Usage
    RPC Servers
    RPC Clients
    TCP Socket Connections
    Port Usage on the Management Server
      Distribution Adapter (opcbbcdist)
      Installation/Upgrade/Patch Tool (ovdeploy)
      Certificate Server (ovcs)
      Communication Utility (bbcutil)
      Display Manager (12000)
      Message Receiver (12001)
      Distribution Manager (12002)
      Communication Manager (12003)
      Forward Manager (12004-12005)
      Request Sender (12006-12040)
      Remote Agent Tool (12041-12050)
      TCP Socket Server (12051-12060)
      NT Virtual Terminal (12061)
  Troubleshooting Problems
    Defining the Size of the Port Range
    Monitoring Nodes Inside and Outside the Firewall
    Various Agent Messages
    Network Tuning for HP-UX 10.20
    Network Tuning for HP-UX 11.x
    Network Tuning for Solaris
    Tracing of the Firewall
  Links
B. OVO Variables and Troubleshooting for HTTPS Managed Nodes
  Configuration Examples
    Port Usage on Managed Nodes
  OVO Variables Used with HTTPS Agents and Firewalls
    SERVER_PORT
    SERVER_BIND_ADDR
    CLIENT_PORT
    CLIENT_BIND_ADDR
    PROXY
  HTTPS Managed Node Variables
    CLIENT_BIND_ADDR
    CLIENT_PORT
    PROXY
    SERVER_BIND_ADDR

C. OVO Variables and Troubleshooting for DCE Managed Nodes
  Configuration Examples
  OVO Variables Used with DCE Agents and Firewalls
    OPC_AGENT_NAT
    OPC_COMM_PORT_RANGE
    OPC_HPDCE_CLIENT_DISC_TIME
    OPC_DIST_MODE
    OPC_MAX_PORT_RETRIES
    OPC_RESTRICT_TO_PROCS
    OPC_RPC_ONLY
  Managed Node Variables
    CLIENT_BIND_ADDR()
    CLIENT_PORT()
    PROXY
    SERVER_BIND_ADDR()
      Communication Types
        DCE/UDP Communication Type
        NCS Communication Type
        Sun RPC Communication Type
    SERVER_PORT()
  Port Usage on Managed Nodes
    Control Agent (13001)
    Distribution Agent (13011-13013)
    Message Agent (13004-13006)
    Communication Agent (13007)
    NT Virtual Terminal (13008-13009)
    Embedded Performance Component (14000-14003)
  Troubleshooting Problems
    When All Assigned Ports Are in Use
      Error Messages for Unavailable Ports
    When the Server Does Not Handle Port Ranges Automatically
      Error Messages for Server Port Handling
    Known Issues in NAT Environments
      Disabling Remote Actions Also Disables Operator-Initiated Actions
    Current Usage of the Port Range
    Communication Issues with NT Nodes
      HP-UX
Printing History

The printing date and part number of the manual indicate the edition of the manual. The printing date changes when a new edition is printed. Minor changes may be made at reprint without changing the printing date. The part number of the manual changes when extensive changes are made.

Manual updates may be issued between editions to correct errors or document product changes. To ensure that you receive the updated or new editions, subscribe to the appropriate product support service. See your HP sales representative for details.

Table 1: Edition History

  First Edition: May 1999
  Second Edition: October 1999
  Third Edition: August 2000
  Third Edition (revised): July 2001
  Fourth Edition: January 2002
  Fifth Edition: July 2004
  Fifth Edition (revised): September 2004
  Sixth Edition: August 2006
Conventions

The following typographical conventions are used in this manual.

Table 2: Typographical Conventions

Italic
  Book or manual titles, and manpage names: "Refer to the OVO Administrator's Reference and the opc(1M) manpage for more information."
  Emphasis: "You must follow these steps."
  Variable that you must supply when entering a command: "At the prompt, enter rlogin username."
  Parameters to a function: "The oper_name parameter returns an integer response."

Bold
  New terms: "The HTTPS agent observes..."

Computer
  Text and other items on the computer screen: "The following system message displays: Are you sure you want to remove current group?"
  Command names: "Use the grep command ..."
  Function names: "Use the opc_connect() function to connect ..."
  File and directory names: /opt/OV/bin/OpC/
  Process names: "Check to see if opcmona is running."
  Window/dialog-box names: "In the Add Logfile window ..."
  Menu name followed by a colon (:) means that you select the menu, then the item; when the item is followed by an arrow (->), a cascading menu follows: "Select Actions: Filtering -> All Active Messages from the menu bar."

Computer Bold
  Text that you enter: "At the prompt, enter ls -l"

Keycap
  Keyboard keys: "Press Return."

[Button]
  Buttons in the user interface: "Click [OK]."
OVO Documentation Map<br />
<strong>HP</strong> OpenView <strong>Operations</strong> (OVO) provides a set of manuals <strong>and</strong> online<br />
help that help you to use the product <strong>and</strong> to underst<strong>and</strong> the concepts<br />
underlying the product. This section describes what information is<br />
available <strong>and</strong> where you can find it.<br />
Electronic Versions of the Manuals<br />
All the manuals are available as Adobe Portable Document Format (PDF) files in the documentation directory on the OVO product CD-ROM.
With the exception of the OVO Software Release Notes, all the manuals are also available in the following OVO web-server directory:
http://<management_server>:3443/ITO_DOC/<lang>/manuals/*.pdf
In this URL, <management_server> is the fully-qualified hostname of your management server, and <lang> stands for your system language, for example, C for the English environment and japanese for the Japanese environment.
Alternatively, you can download the manuals from the following website:
http://ovweb.external.hp.com/lpe/doc_serv
Watch this website regularly for the latest edition of the OVO Software Release Notes, which is updated every two to three months with the latest news, such as additionally supported OS versions, the latest patches, and so on.
OVO Manuals<br />
This section provides an overview of the OVO manuals and their contents.

Table 3 OVO Manuals

OVO Installation Guide for the Management Server (Hardcopy, PDF)
Designed for administrators who install OVO software on the management server and perform the initial configuration. This manual describes:
• Software and hardware requirements
• Software installation and de-installation instructions
• Configuration defaults

OVO Concepts Guide (Hardcopy, PDF)
Provides you with an understanding of OVO on two levels. As an operator, you learn about the basic structure of OVO. As an administrator, you gain an insight into the setup and configuration of OVO in your own environment.

OVO Administrator’s Reference (PDF only)
Designed for administrators who install OVO on the managed nodes and are responsible for OVO administration and troubleshooting.

OVO DCE Agent Concepts and Configuration Guide (PDF only)
Contains conceptual and general information about the OVO DCE/NCS-based managed nodes, and provides platform-specific information about each DCE/NCS-based managed-node platform.

OVO HTTPS Agent Concepts and Configuration Guide (PDF only)
Provides platform-specific information about each HTTPS-based managed-node platform.

OVO Reporting and Database Schema (PDF only)
Provides a detailed description of the OVO database tables, as well as examples for generating reports from the OVO database.

OVO Entity Relationship Diagrams (PDF only)
Provides you with an overview of the relationships between the tables and the OVO database.
Table 3 OVO Manuals (Continued)

OVO Java GUI Operator’s Guide (PDF only)
Provides you with a detailed description of the OVO Java-based operator GUI and the Service Navigator. This manual contains detailed information about general OVO and Service Navigator concepts and tasks for OVO operators, as well as reference and troubleshooting information.

Service Navigator Concepts and Configuration Guide (Hardcopy, PDF)
Provides information for administrators who are responsible for installing, configuring, maintaining, and troubleshooting the HP OpenView Service Navigator. This manual also contains a high-level overview of the concepts behind service management.

OVO Software Release Notes (PDF only)
Describes new features and helps you:
• Compare features of the current software with features of previous versions.
• Determine system and software compatibility.
• Solve known problems.

OVO Supplementary Guide to MPE/iX Templates (PDF only)
Describes the message source templates that are available for the MPE/iX managed nodes. This guide is not available for OVO on Solaris.

Managing Your Network with HP OpenView Network Node Manager (Hardcopy, PDF)
Designed for administrators and operators. This manual describes the basic functionality of the HP OpenView Network Node Manager, which is an embedded part of OVO.

OVO Database Tuning (ASCII)
This ASCII file is located on the OVO management server at the following location:
/opt/OV/ReleaseNotes/opc_db.tuning
Additional OVO-related Products<br />
This section provides an overview of the OVO-related manuals and their contents.
Table 4 Additional OVO-related Manuals

HP OpenView Operations for UNIX Developer’s Toolkit
If you purchase the HP OpenView Operations for UNIX Developer’s Toolkit, you receive the full OVO documentation set, as well as the following manuals:

OVO Application Integration Guide (Hardcopy, PDF)
Suggests several ways in which external applications can be integrated into OVO.

OVO Developer’s Reference (Hardcopy, PDF)
Provides an overview of all the available application programming interfaces (APIs).

HP OpenView Event Correlation Designer for NNM and OVO
If you purchase HP OpenView Event Correlation Designer for NNM and OVO, you receive the following additional documentation. Note that HP OpenView Event Correlation Composer is an integral part of NNM and OVO. OV Composer usage in the OVO context is described in the OS-SPI documentation.

HP OpenView ECS Configuring Circuits for NNM and OVO (Hardcopy, PDF)
Explains how to use the ECS Designer product in the NNM and OVO environments.
OVO Online Information<br />
The following information is available online.<br />
Table 5 OVO Online Information

HP OpenView Operations Administrator’s Guide to Online Information
Context-sensitive help system that contains detailed help for each window of the OVO administrator Motif GUI, as well as step-by-step instructions for performing administrative tasks.

HP OpenView Operations Operator’s Guide to Online Information
Context-sensitive help system that contains detailed help for each window of the OVO operator Motif GUI, as well as step-by-step instructions for operator tasks.

HP OpenView Operations Java GUI Online Information
HTML-based help system for the OVO Java-based operator GUI and Service Navigator. This help system contains detailed information about general OVO and Service Navigator concepts and tasks for OVO operators, as well as reference and troubleshooting information.

HP OpenView Operations Man Pages
Manual pages available online for OVO. These manual pages are also available in HTML format.
To access these pages, go to the following location (URL) with your web browser:
http://<management_server>:3443/ITO_MAN
In this URL, the variable <management_server> is the fully-qualified hostname of your management server. Note that the man pages for the OVO HTTPS agent are installed on each managed node.
About OVO Online Help<br />
This preface describes online documentation for the HP OpenView Operations (OVO) Motif and the Java operator graphical user interfaces (GUIs).
Online Help for the Motif GUI<br />
Online information for the HP OpenView Operations (OVO) Motif graphical user interface (GUI) consists of two separate volumes, one for operators and one for administrators. In the operator’s volume you will find the HP OpenView OVO Quick Start, describing the main operator windows.
Types of Online Help<br />
The operator and administrator volumes include the following types of online help:
❏ Task Information<br />
Information you need to perform tasks, whether you are an operator<br />
or an administrator.<br />
❏ Icon Information<br />
Popup menus and reference information about OVO icons. You access this information with a right-click of your mouse button.
❏ Error Information<br />
Information about errors displayed in the OVO Error Information<br />
window. You can access context-sensitive help when an error occurs.<br />
Or you can use the number provided in an error message to perform<br />
a keyword search within the help system.<br />
❏ Search Utility<br />
Index search utility that takes you directly to topics by name.<br />
❏ Glossary<br />
Glossary of OVO terminology.<br />
❏ Help Instructions<br />
Instructions about the online help system itself for new users.<br />
❏ Printing Facility<br />
Printing facility, which enables you to print any or all topics in the help system. (An HP LaserJet printer or a compatible printer device is required to print graphics.)
To Access Online Help<br />
You can access the help system in any of the following ways:<br />
❏ F1 Key<br />
Press F1 while the cursor is in any active text field or on any active<br />
button.<br />
❏ Help Button<br />
Click [Help] at the bottom of any window.<br />
❏ Help Menu<br />
Open the drop-down Help menu from the menu bar.<br />
❏ Right Mouse Click<br />
Click a symbol, then right-click the mouse button to access the Help<br />
menu.<br />
You can then select task lists, which are arranged by activity, or window and field lists. You can access any topic in the help volume from every help screen. Hyperlinks provide related information on other help topics.
You can also access context-sensitive help in the Message Browser and Message Source Templates window. After selecting Help: On Context from the menu, the cursor changes into a question mark, which you can then position over the area about which you want help. When you click the mouse button, the corresponding help page is displayed in its help window.
Online Help for the Java GUI and Service Navigator
The online help for the HP OpenView Operations (OVO) Java graphical user interface (GUI), including Service Navigator, helps operators to become familiar with and use the OVO product.
Types of Online Help<br />
The online help for the OVO Java GUI includes the following<br />
information:<br />
❏ Tasks<br />
Step-by-step instructions.<br />
❏ Concepts
Introduction to the key concepts and features.
❏ References<br />
Detailed information about the product.<br />
❏ Troubleshooting<br />
Solutions to common problems you might encounter while using the<br />
product.<br />
❏ Index<br />
Alphabetized list of topics to help you find the information you need,<br />
quickly <strong>and</strong> easily.<br />
Viewing a Topic<br />
To view any topic, open a folder in the left frame of the online<br />
documentation window, then click the topic title. Hyperlinks provide<br />
access to related help topics.<br />
Accessing the Online Help<br />
To access the help system, select Help: Contents from the menu bar of<br />
the Java GUI. A web browser opens <strong>and</strong> displays the help contents.<br />
NOTE To access online help for the Java GUI, you must first configure OVO to<br />
use your preferred browser.
1 Firewall Configuration in OVO
Chapter 1 25
About this Chapter<br />
This chapter describes how to set up and configure OVO in a firewall environment. It describes which steps need to be performed on the OVO management server and on the firewall to allow communication to an agent outside of the firewall.
This document is not based on any specific firewall software. The configuration actions should be easy to adapt to any firewall software. Knowledge of OVO and firewall administration is required to understand this chapter.
Naming Conventions<br />
Table 1-1 specifies the naming conventions that have been applied to the<br />
filter rules.<br />
Table 1-1 Naming Conventions Used in Filter Rules

HTTPS NODE: Managed node where a real HTTPS agent is available.
DCE NODE: Managed node where a real DCE agent is available.
JAVA GUI: System that has the Java GUI installed.
MGD NODE: Managed node of any node type.
MGMT SRV: OVO management server.
NCS NODE: Managed node where an NCS agent is available.
NT NODE: Managed node running MS Windows.
PACKAGE IP: Virtual IP address of the MC/ServiceGuard cluster node.
PERFORMANCE MANAGER: System where HP OpenView Performance Manager is installed.
PHYS IP NODE: Physical IP address of the MC/ServiceGuard cluster node.
PROXY: System that serves as HTTP proxy.
REPORTER: System where HP OpenView Reporter is installed.
UX NODE: Managed node running any kind of UNIX system.
OVO Communication <strong>Concepts</strong><br />
HTTPS Agent and Management Server Communication
The basic communication model between OVO HTTPS agents and the OVO management server is shown in Figure 1-1 below.
Figure 1-1 HTTPS Agent Components and Responsibilities at Runtime
(Figure: the HTTPS agent components at runtime, with agent software and valid certificates installed.)
Table 1-2 describes the communication processes shown in Figure 1-1.
Table 1-2 Communication Model Process Descriptions

ovbbccb (Communication Broker): HTTPS-RPC server.
opcmsga (Message Agent): Sends outgoing messages to the server.
opcacta (Action Agent): Starts and controls actions on the managed node.
ovcd (Control Component): Controls the agents. Handles incoming requests.
ovconfd (Configuration and Deployment Component): Handles distribution data from the server.
ovcs (Certificate Server, on the OVO management server): Creates certificates and private keys for authentication in secure communication.
opcmsgrb (Message Receiver): Receives incoming messages and action responses from the agents.
coda (Embedded Performance Component): Collects performance counter and instance data from the operating system.
opcbbcdist (Distribution Adapter): Controls configuration deployment to HTTPS nodes.
opcragt (Remote Agent Tool): An RPC client that contacts the Endpoint Mapper and the Control Agent of all the agents.
ovoareqsdr (Request Sender): Sends outgoing requests to the agents. Handles the heartbeat polling.

For additional information on the configuration of HTTPS agents, refer to the OVO HTTPS Agent Concepts and Configuration Guide.
DCE Agent <strong>and</strong> Management Server Communication<br />
The basic communication model between OVO DCE agents and the OVO management server is shown in Figure 1-2 below.
Figure 1-2 Example Communications Model Process
(Figure: inside the firewall, the management-server processes ovoareqsdr and opccmm (RPC and socket clients) and opcmsgrd, rpcd, opcdistm, and opctss (endpoint mapper, RPC, and socket servers) communicate through the firewall with the managed-node processes rpcd, opcctla, opccma, opcmsga, opcdista, and coda. The Console, Reporter, and Performance Agent are also shown.)
Table 1-3 describes the communication processes shown in Figure 1-2.
Table 1-3 Communication Model Process Descriptions

coda (Embedded Performance Component): Collects performance counter and instance data from the operating system.
opccma (Communication Agent): Handles bulk transfer requests from the server.
opccmm (Communication Manager): Handles bulk transfer requests from the agent.
opcctla (Control Agent): Controls the agents. Handles incoming requests.
opcdista (Distribution Agent): Pulls distribution data from the server.
opcdistm (Distribution Manager): Handles distribution requests from the agents.
opcmsga (Message Agent): Sends outgoing messages to the server.
opcmsgrd (Message Receiver): Receives incoming messages and action responses from the agents.
opctss (TCP Socket Server): Serves a TCP socket connection for the distribution data.
ovoareqsdr (Request Sender): Sends outgoing requests to the agents. Handles the heartbeat polling.
rpcd (RPC Daemon): The endpoint mapper.
opcragt (Remote Agent Tool): An RPC client that contacts the Endpoint Mapper and the Control Agent of all the agents.
Performance Manager: Graphical analysis and planning tool, designed to analyze and project future resource utilization and performance trends.
Reporter: Management reporting tool that automatically transforms the data captured by OVO agents into management information.
OVO Heartbeat Monitoring<br />
There are different types of OVO heartbeat monitoring that can be<br />
configured per node in the OVO Node Bank.<br />
❏ Normal<br />
❏ RPC Only (for <strong>Firewall</strong>s)<br />
❏ Agent Sends Alive Packets<br />
Normal Heartbeat Monitoring<br />
If normal heartbeat monitoring is configured, the server first attempts to contact the node using ICMP packets. If this succeeds, it continues the heartbeat monitoring using RPC calls. When an RPC call fails, it uses ICMP packets to find out whether the system is at least alive. As soon as this succeeds, the RPC calls are tried again.
RPC Only Heartbeat Polling (for Firewalls)
Since ICMP usually gets blocked in firewall environments, the RPC Only heartbeat monitoring option configures the server so that only RPC calls are used. Since RPC connections must be allowed through the firewall, this works even if ICMP gets blocked.
The disadvantage is that, in the event of a system outage, the network load is higher than with normal heartbeat monitoring because the RPC connection is still being tried.
Agent Sends Alive Packets
By selecting this option, the agent can be triggered to send ICMP packets to the server, reporting that the agent is alive. When such an alive packet is received at the server, the server resets the polling interval. If the polling interval expires without an alive packet arriving, the server starts the usual polling mechanism, as configured, to determine the agent’s status.
If alive packets are configured, ICMP packets are sent at 2/3 of the configured heartbeat monitoring interval. This guarantees that an alive packet arrives at the server before the configured interval is over.
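The 2/3 rule can be checked with a quick shell calculation (the 600-second heartbeat interval is only an example value, not an OVO default):

```shell
# Alive packets are sent at 2/3 of the configured heartbeat interval,
# so one always reaches the server before the interval expires.
heartbeat=600                           # example: 10-minute interval, in seconds
send_interval=$(( heartbeat * 2 / 3 ))
echo "$send_interval"                   # prints 400
```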
In a firewall environment, this option is not advised for nodes outside the firewall, because ICMP can get blocked there. For nodes inside the firewall, this option is recommended, since it avoids RPC calls being made from the server to nodes inside the firewall and blocking ports.
Communication Types<br />
Each OVO node can be configured to use a specific communication type.<br />
Most of the commonly used agent platforms support HTTPS and DCE. Some support their own communication types, for example, Microsoft Windows and Novell NetWare. Microsoft’s RPC implementation is mostly compatible with DCE, while for Novell NetWare nodes a special RPC stack is included.
The following communication types are supported:<br />
❏ HTTPS/TCP<br />
❏ DCE/UDP<br />
❏ DCE/TCP<br />
❏ Microsoft RPC<br />
❏ Sun RPC<br />
❏ NCS<br />
HTTPS/TCP Communication<br />
HTTPS 1.1-based communication is the latest communication technology used by HP OpenView products and allows applications to exchange data between heterogeneous systems.
OpenView products using HTTPS communication can easily communicate with each other, as well as with other industry-standard products. It is also now easier to create new products that can communicate with existing products on your network and easily integrate with your firewalls and HTTP proxies.
HTTPS communication provides the following major advantages:
❏ Firewall friendly
❏ Secure
❏ Open
❏ Scalable
DCE/UDP Communication<br />
Since UDP does not do any transmission control, communication packets can be lost on the network. DCE RPCs based on UDP implement their own transmission control at a higher level of the communication stack. Therefore, no communication is lost.
Since UDP is not connection-based, everything is cleaned up immediately after the communication is complete. This makes it the preferred choice for all nodes where the following applies:
❏ The node is located inside the firewall. See “DCE/UDP<br />
Communication Type” on page 82 for more information.<br />
❏ The node is connected on a good LAN connection where few packets<br />
are lost.<br />
DCE/TCP Communication<br />
TCP is a connection-oriented protocol. The protocol detects if packets are dropped on the network and re-sends only those packets. This makes it the choice for unreliable networks.
Since TCP is connection-oriented, it keeps a connection open for a period after the communication is finished. This avoids having to reopen a new connection if other communication is requested later. This can cause
problems in environments where communication goes to multiple different targets, for example, OVO, because resources stay locked for a while. So, wherever possible, switch the node connection type to UDP.
Microsoft RPC Communication<br />
Microsoft RPCs are mostly compatible with DCE RPCs. Therefore, the notes on UDP and TCP apply.
For specific problems caused by Microsoft’s implementation, see “Windows Managed Nodes” on page 80.
Sun RPC Communication<br />
For Novell NetWare, Sun RPC is used for communication. It can use UDP or TCP.
For specific problems caused by the implementation, see “Sun RPC Communication Type” on page 83.
NCS Communication<br />
NCS is a UDP-based protocol that implements the transmission control at a higher level. For nodes that support only NCS, there is no choice between UDP and TCP. See “NCS Communication Type” on page 83.
Configuring OVO for <strong>Firewall</strong> Environments<br />
A firewall is a router system that filters all communication between two subnets. Only packets that match at least one filter rule are allowed to pass the firewall. All other packets are discarded.
A filter rule usually consists of the protocol type (for example, TCP, UDP, or ICMP), a direction (inside->outside or outside->inside), a source port, and a destination port. Instead of a single port, a port range can be specified.
Figure 1-3 Example of a firewall configuration
(Figure: an RPC client with IP address 15.136.120.193 inside the firewall connects over TCP from source ports 12001 and 14001 to destination ports 135 and 13001 of the RPC server 192.168.1.2 outside the firewall. The filter rule shown reads: allow TCP from 15.136.120.193 port 12001 to 192.168.1.2 port 135.)

NOTE The default configuration for communication over a firewall is described first. Special cases are described in subsequent chapters.
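Filter-rule syntax differs between firewall products. Purely as an illustration, the rule from Figure 1-3 might be written as follows with Linux iptables (a product not covered by this guide; the chain setup is an assumption):

```shell
# Sketch only: iptables equivalent of the Figure 1-3 rule. The addresses
# and ports come from the figure; everything else is an assumption.
iptables -A FORWARD -p tcp -s 15.136.120.193 --sport 12001 \
         -d 192.168.1.2 --dport 135 -j ACCEPT
# Packets matching no rule are discarded, as the text describes:
iptables -P FORWARD DROP
```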
Special <strong>Configuration</strong>s<br />
Motif User Interface<br />
It is advised that the Motif GUI is not directed through the firewall, because this can cause security exposure problems. If it is required, refer to your standard instructions on redirecting the X Window System through a firewall.
OVO Java User Interface<br />
After the installation, the Java GUI requires only one TCP connection at runtime. This port number can be configured. The default port number is 2531 for the Java GUI and 35211 for the Secure Java GUI.
The firewall must be configured to allow the Java GUI access to the management server ports listed in Table 1-4.
Table 1-4 Filter Rule for the OVO Java GUI

Source           Destination  Protocol  Source Port  Destination Port
JAVA GUI         MGMT SRV     TCP       any          2531
Secure JAVA GUI  MGMT SRV     TCP       any          35211

Configuring Ports for the Java GUI or Secure Java GUI
Note that the port settings on the management server and the Java GUI client (or Secure Java GUI client) must be identical.
1. Configuring the Port on the Management Server:
a. In the file /etc/services, locate the following line:
Java GUI:
ito-e-gui 2531/tcp # ITO Enterprise Java GUI
Secure Java GUI:
ito-e-gui-sec 35211/tcp # ITO Enterprise Secure Java GUI
b. Change the port number 2531 or 35211 to the port number you wish to use.
c. Make the inetd service re-read its configuration:
/usr/sbin/inetd -c
2. Configuring the Port on the Java GUI/Secure Java GUI Client:
Edit the GUI startup script ito_op (UNIX) or ito_op.bat (Windows) and add the following line:
port=<port_number>
Where <port_number> is the port number you wish to use.
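The server-side edit in step 1 can be sketched as a small shell sequence. The example below works on a scratch copy of the services entry, and port 12345 is a placeholder for whatever port you choose; on a real system you would edit /etc/services itself and then run /usr/sbin/inetd -c:

```shell
# Write an /etc/services-style entry to a scratch file, then rewrite it
# to move the Java GUI service from port 2531 to a hypothetical 12345.
printf 'ito-e-gui 2531/tcp # ITO Enterprise Java GUI\n' > services.tmp
sed 's/^ito-e-gui 2531\/tcp/ito-e-gui 12345\/tcp/' services.tmp
```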
Message Forwarding<br />
It is strongly recommended not to have OVO management servers outside a firewall to the Internet.
However, in many environments there are firewalls within a company, so that multiple management servers with message forwarding have to be set up. Figure 1-4 illustrates the Internet firewall for a message forwarding scheme.
Figure 1-4 Internet Firewall for Message Forwarding
(Figure: Server 1 and its agents on one side of the firewall, Server 2 and its agents on the other; messages are forwarded between the two servers across the firewall.)
Communication <strong>Concepts</strong> in Message Forwarding<br />
Figure 1-5 on page 39 illustrates the communication model between two OVO management servers.
In Figure 1-5, the RPC daemon (rpcd) represents the endpoint mapper. The Message Receiver (opcmsgrd) receives incoming messages and action responses from agents and other management servers. The Forward Manager (opcforwm) forwards messages to other management servers.
Figure 1-5 Communication Between Two OVO Management Servers
(Figure: on each management server, the RPC client opcforwm connects through the firewall to the endpoint mapper rpcd and the RPC server opcmsgrd of the other management server.)
Configuring Message Forwarding in Firewall Environments
When configuring message forwarding between OVO management servers, each management server must be configured in the other’s node bank. The communication type should be configured to use DCE/TCP. The firewall must be configured according to the rules specified in Table 1-5.

NOTE Message forwarding between OVO management servers is based on DCE communication.
Table 1-5 DCE/TCP Filter Rules for Multiple Management Servers

Source    Destination  Protocol  Source Port  Destination Port  Description
SERVER 1  SERVER 2     TCP       12004-12005  135               Endpoint map
SERVER 2  SERVER 1     TCP       12004-12005  135               Endpoint map
SERVER 1  SERVER 2     TCP       12004-12005  12001             Message Receiver
SERVER 2  SERVER 1     TCP       12004-12005  12001             Message Receiver
These rules allow only the forwarding of messages <strong>and</strong> the<br />
synchronization of the two management servers. As soon as actions are<br />
executed on an agent system on the other side of the firewall, the agent<br />
rules must be applied to the firewall as described in Chapter 4,<br />
“Configuring DCE Nodes,” on page 67.<br />
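As an illustration only, the rules in Table 1-5 could be sketched on a Linux packet filter with iptables roughly as follows. SERVER1 and SERVER2 are placeholder addresses; an actual firewall product will use its own rule syntax.

```shell
# Sketch of the Table 1-5 rules in Linux iptables syntax (illustrative
# only; SERVER1 and SERVER2 stand for the two management server IPs).
SERVER1=10.1.1.1   # assumed address of management server 1
SERVER2=10.2.2.2   # assumed address of management server 2

for PAIR in "$SERVER1 $SERVER2" "$SERVER2 $SERVER1"; do
  set -- $PAIR
  # Endpoint mapper (rpcd) lookups from the Forward Manager port range
  iptables -A FORWARD -p tcp -s "$1" -d "$2" \
    --sport 12004:12005 --dport 135 -j ACCEPT
  # Message Receiver (opcmsgrd) on the remote server
  iptables -A FORWARD -p tcp -s "$1" -d "$2" \
    --sport 12004:12005 --dport 12001 -j ACCEPT
done
```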
VP390/VP400 in <strong>Firewall</strong> Environments<br />
VP390 and VP400 consist of an agent component that runs on the<br />
mainframe and a server component that handles all communication<br />
between agent and server, as shown in Figure 1-6.<br />
The agent TCP subtask requests the opening of two TCP/IP ports on the<br />
mainframe, then waits for the VP390/VP400 server component to start<br />
communication through these ports. All connections between these<br />
components are initiated from the server, but data may be sent from the<br />
agent to the server (in the same way as messages being sent to the<br />
message server). The communication is based on TCP sockets.<br />
Figure 1-6 Firewall for VP390 (VP400)<br />
[Diagram: the Master Message Server connects through the firewall to the agent TCP subtask on port 6106 (8000 for VP400), and the Command Server connects on port 6107 (8001); data flows back to the server over the same connections.]<br />
The two connection ports by default are 6106 (for messages) and 6107<br />
(for commands). To change the defaults, follow the instructions below:<br />
❏ Changing the Default Ports on the Managed Node<br />
The ports are defined in the VPOPARM member of the SAMP dataset<br />
which is loaded into the mainframe at installation time. To change<br />
the ports, edit the SAMP(VPOPARM) member. The comments in<br />
VPOPARM refer to these two ports as the MMSPORT <strong>and</strong> the CMDPORT.<br />
To edit defaults on the agent, enter the text below:<br />
VP390 TCP 6106 6107<br />
VP400 TCP 8000 8001<br />
Refer to <strong>HP</strong> OpenView <strong>Operations</strong> OS/390 Management <strong>Concepts</strong><br />
Guide for additional information.<br />
❏ Changing the Default Ports on the Management Server<br />
VP390 Specify the new values in the<br />
EVOMF_HCI_AGENT_PORT <strong>and</strong><br />
EVOMF_CMDS_AGENT_PORT values.<br />
VP400 Specify the new values in the<br />
EV400_AS400_MSG_PORT <strong>and</strong><br />
EV400_AS400_CMD_PORT values.<br />
The firewall must be configured for the ports specified in Table 1-6.<br />
Table 1-6 Filter Rules for VP390 and VP400<br />
Source | Destination | Protocol | Source Port | Destination Port | Description<br />
SERVER | VP390 | TCP | any | 6106 | Messages<br />
SERVER | VP390 | TCP | any | 6107 | Commands<br />
SERVER | VP400 | TCP | any | 8000 | Messages<br />
SERVER | VP400 | TCP | any | 8001 | Commands<br />
Network Address Translation<br />
Network address translation (NAT) is often used on firewall systems in<br />
combination with port restrictions. It translates the IP addresses of<br />
traffic that crosses the firewall.<br />
Network address translation can be used to achieve the following:<br />
❏ Hide the complete IP range of one side of the firewall from the other<br />
side.<br />
❏ Use an internal IP range that cannot be used on the Internet, so it<br />
must be translated to a range that is available there.<br />
NAT can be set up to translate only the IP addresses of one side of the<br />
firewall or to translate all addresses as shown in Figure 1-7.<br />
Figure 1-7 Firewall Using NAT<br />
[Diagram: a packet sent from 192.168.1.3 to 10.136.120.193 has its source address translated to 53.168.1.3 at the firewall; replies addressed to 53.168.1.3 are translated back to 192.168.1.3.]<br />
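For illustration, the translation shown in Figure 1-7 corresponds to a NAT rule pair like the following Linux iptables sketch; an actual firewall product will use its own configuration syntax.

```shell
# Sketch: translate the address 192.168.1.3 to 53.168.1.3 at the
# firewall, as in Figure 1-7 (Linux iptables syntax, illustrative only).
iptables -t nat -A POSTROUTING -s 192.168.1.3 \
  -j SNAT --to-source 53.168.1.3        # outgoing: rewrite source address
iptables -t nat -A PREROUTING -d 53.168.1.3 \
  -j DNAT --to-destination 192.168.1.3  # incoming: rewrite destination
```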
Address Translation of Duplicate Identical IP Ranges<br />
Figure 1-8 illustrates an Address Translation firewall for duplicate<br />
IP ranges.<br />
Figure 1-8 Address Translation Firewall for Duplicate Identical IP Ranges<br />
[Diagram: Customer 1 and Customer 2 both use the 10.x.x.x range internally; each connects to the Internet Service Provider through its own NAT firewall (inside/outside).]<br />
This scenario often occurs with Internet Service Providers (ISPs), which<br />
have multiple customers using the same IP range internally. To manage<br />
all customers, the ISP sets up an Address Translation firewall for each.<br />
After translation, the systems at all customers have unique IP addresses<br />
on the ISP’s network.<br />
OVO can h<strong>and</strong>le this scenario by using the unique IP address for all<br />
communication. This means that the OVO agent on a customer’s system<br />
uses the IP address as known on the ISP side of the firewall for the OVO<br />
internal communication.<br />
Known Issues in NAT Environments<br />
In a NAT environment, the following problems can be encountered.<br />
FTP Does Not Work<br />
Problem<br />
There is a general problem with FTP in a NAT environment. This will<br />
cause the OVO agent installation mechanism to fail. The following<br />
scenarios might occur:<br />
❏ The installation on Microsoft Windows nodes just hangs for a while<br />
after entering the Administrator’s password.<br />
❏ The UNIX installation reports that the node does not belong to the<br />
configured operating system version.<br />
This issue can be verified by manually trying FTP from the OVO<br />
management server to an agent outside the firewall. The FTP login will<br />
succeed but at the first data transfer (GET, PUT, DIR), FTP will fail.<br />
Possible error messages are:<br />
500 Illegal PORT Comm<strong>and</strong><br />
425 Can’t build data connection: Connection refused.<br />
500 You’ve GOT to be joking.<br />
425 Can’t build data connection: Connection refused.<br />
200 PORT comm<strong>and</strong> successful.<br />
hangs for about a minute before reporting<br />
425 Can’t build data connection: Connection timed out.<br />
Usually, FTP involves opening a connection to an FTP server and then<br />
accepting a connection from the server back to the client on a<br />
randomly-chosen, high-numbered TCP port. The connection from the<br />
client is called the control connection, <strong>and</strong> the one from the server is<br />
known as the data connection. All comm<strong>and</strong>s <strong>and</strong> the server’s responses<br />
go over the control connection, but any data sent back, such as directory<br />
lists or actual file data in either direction, go over the data connection.<br />
Some FTP clients <strong>and</strong> servers implement a different method known as<br />
passive FTP to retrieve files from an FTP site. This means that the client<br />
opens the control connection to the server, tells the FTP server to expect<br />
a second connection <strong>and</strong> then opens the data connection to the server<br />
itself on a r<strong>and</strong>omly-chosen, high-numbered port.<br />
Solution<br />
The HP-UX FTP client does not support passive FTP. As a result, OVO<br />
agent installation using FTP cannot be used. Instead, manually install<br />
the agent on the managed node system, or use the SSH installation<br />
method, provided that SSH can cross the firewall.<br />
2 Advanced <strong>Configuration</strong><br />
Special <strong>Configuration</strong>s<br />
ICMP (DCE Agents Only)<br />
Since ICMP packets are usually blocked at the firewall, the agent<br />
provides a switch to disable all ICMP requests to the server. To<br />
enable this functionality, add the following line to the opcinfo<br />
file and restart the agents:<br />
OPC_RPC_ONLY TRUE<br />
DNS<br />
If DNS queries are blocked over the firewall, local name resolution has to<br />
be set up so that the agent can resolve its own <strong>and</strong> the OVO management<br />
server’s name.<br />
SNMP Queries<br />
If SNMP queries are blocked over the firewall, no automatic<br />
determination of the node type when setting up a node is possible. For all<br />
nodes outside the firewall, the correct node type has to be selected<br />
manually.<br />
If SNMP queries are wanted over the firewall, the following ports have to<br />
be opened up as displayed in Table 2-1:<br />
Table 2-1 Filter Rules for SNMP Queries<br />
Source Destination Protocol Source Port Destination Port Description<br />
MGMT SRV MGD NODE UDP any 161 SNMP<br />
MGD NODE MGMT SRV UDP 161 any SNMP<br />
OVO Agent Installation in <strong>Firewall</strong> Environments<br />
In most firewall environments, the agents will be installed manually <strong>and</strong><br />
will not use the automatic OVO agent installation mechanism. If the<br />
automatic agent installation is required for the firewall, the following<br />
ports need to be opened:<br />
❏ Windows: Table 2-2 on page 49<br />
❏ UNIX: Table 2-3 on page 50<br />
Table 2-2 Filter Rules for Windows Agent Installation<br />
Source | Destination | Protocol | Source Port | Destination Port | Description<br />
MGMT SRV | NT NODE | ICMP echo request | n/a | n/a | ICMP<br />
NT NODE | MGMT SRV | ICMP echo request | n/a | n/a | ICMP<br />
MGMT SRV | NT NODE | TCP | any | 21 | FTP<br />
NT NODE | MGMT SRV | TCP | 20 | any | FTP-Data<br />
The installation of Windows managed nodes might fail <strong>and</strong> report the<br />
following message:<br />
E-> Connection error to management server<br />
hpbblcsm.bbn.hp.com.<br />
E-> Error from Inform<strong>Manager</strong>.<br />
E-> Setup program aborted.<br />
If this occurs, it is related to the firewall blocking that communication.<br />
As a workaround, install the agents manually as described in the OVO<br />
Administrator’s Reference. In general, you will need to execute the<br />
opc_pre.bat script instead of the opc_inst.bat script. In addition,<br />
execute the following comm<strong>and</strong>s on the management server:<br />
opcsw -installed <node><br />
opchbp -start <node><br />
Table 2-3 specifies filter rules for UNIX managed nodes.<br />
Table 2-3 Filter Rules for UNIX Agent Installation<br />
Source | Destination | Protocol | Source Port | Destination Port | Description<br />
MGMT SRV | UX NODE | ICMP echo request | n/a | n/a | ICMP<br />
UX NODE | MGMT SRV | ICMP echo request | n/a | n/a | ICMP<br />
MGMT SRV | UX NODE | TCP | any | 21 | FTP<br />
UX NODE | MGMT SRV | TCP | 20 | any | FTP-Data<br />
MGMT SRV | UX NODE | TCP | any | 512 | Exec<br />
MGMT SRV | UX NODE | TCP | any | 22 | SSH (Exec, File Transfer)<br />
NOTE The installation of UNIX managed nodes will run into a timeout of about<br />
one minute when checking the password. This can only be avoided by<br />
completely opening the firewall.<br />
3 Configuring HTTPS Nodes<br />
This chapter describes how to set up and configure OVO HTTPS managed<br />
nodes in a firewall environment. It describes the steps that need to be<br />
performed on the OVO management server and on the firewall to allow<br />
communication to an agent outside of the firewall.<br />
Specifying Client Port Ranges<br />
To specify client port ranges for HTTPS nodes on the OVO management<br />
server, use the following comm<strong>and</strong>:<br />
ovconfchg -ovrg server -ns bbc.http -set CLIENT_PORT <port range><br />
This command sets the specified port range for all HTTPS nodes managed<br />
by the OVO management server from which the command is called.<br />
A port range can also be set for a specific process. For example, to set a<br />
port range for the request sender (ovoareqsdr), use the following command:<br />
ovconfchg -ovrg server -ns bbc.http.ext.ovoareqsdr -set \<br />
CLIENT_PORT <port range><br />
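For example, to dedicate a range to the request sender and read the value back. The range 14100-14199 is an assumption for illustration only, and this sketch assumes the ovconfget read command is available alongside ovconfchg.

```shell
# Set a hypothetical client port range for the request sender and
# verify it (14100-14199 is an example value, not a recommendation).
ovconfchg -ovrg server -ns bbc.http.ext.ovoareqsdr -set CLIENT_PORT 14100-14199
ovconfget -ovrg server bbc.http.ext.ovoareqsdr CLIENT_PORT
```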
Management Server and Managed Node Port Settings<br />
For both the OVO management server and the OVO managed nodes, a set of<br />
ports can be defined. The following settings are used, as an example,<br />
within this chapter. The settings can be changed to reflect your<br />
environment.<br />
Table 3-1 specifies the management server communication ports.<br />
Table 3-1 Management Server Communication Port Settings<br />
Server Type Communication Type Port Range<br />
ovbbccb HTTPS Server 383<br />
Remote Agent Tool HTTPS Client ANY, Configurable<br />
Request Sender HTTPS Client ANY, Configurable<br />
opcbbcdist HTTPS Client ANY, Configurable<br />
ovcs HTTPS Client ANY, Configurable<br />
Table 3-2 specifies the managed node communication ports.<br />
Table 3-2 Managed Node Communication Port Settings<br />
Agent Type Communication Type Port Range<br />
ovbbccb HTTPS Server 383<br />
Message Agent HTTPS Client ANY, Configurable<br />
Table 3-3 specifies the console communication ports.<br />
Table 3-3 Console Communication Port Settings<br />
Agent Type | Communication Type | Port Range<br />
Reporter (3.5) | HTTPS Client | ANY, Configurable<br />
Performance Manager (4.0.5) | HTTPS Client | ANY, Configurable<br />
Configuring a Firewall for HTTPS Nodes without a Proxy<br />
For the runtime of the OVO agent, the firewall requires a specific range<br />
of communication ports to be opened. This allows the use of normal agent<br />
functionality. For details on the agent installation, see “OVO Agent<br />
Installation in <strong>Firewall</strong> Environments” on page 49.<br />
Table 3-4 specifies the filter rules for runtime of HTTPS managed nodes.<br />
Figure 3-1 Firewall for HTTPS Nodes without a Proxy<br />
[Diagram: the Server in the server domain (svr.dom) communicates through the firewall with Agents 1 to n in the external domain (ext.dom).]<br />
Table 3-4 Filter Rules for Runtime of HTTPS Managed Nodes without Proxies<br />
Source | Destination | Protocol | Source Port | Destination Port<br />
MGMT SRV | HTTPS NODE | TCP | ANY, Configurable a | 383<br />
HTTPS NODE | MGMT SRV | TCP | ANY, Configurable a | 383<br />
a. Default allocated by system. Specific port can be configured.<br />
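Expressed as a Linux iptables sketch, Table 3-4 reduces to opening the Communication Broker port 383 in both directions. MGMT_SRV and NODE are placeholder addresses; an actual firewall product will use its own syntax.

```shell
# Sketch of Table 3-4: each side may contact the other side's
# Communication Broker on port 383 (placeholder addresses).
MGMT_SRV=10.136.120.193   # assumed management server address
NODE=53.168.1.3           # assumed HTTPS node address

iptables -A FORWARD -p tcp -s "$MGMT_SRV" -d "$NODE" --dport 383 -j ACCEPT
iptables -A FORWARD -p tcp -s "$NODE" -d "$MGMT_SRV" --dport 383 -j ACCEPT
```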
Configuring a Firewall for HTTPS Nodes with Proxies<br />
For the runtime of the OVO agent with HTTP proxy, the firewall requires<br />
a specific range of communication ports to be opened. This allows the use<br />
of normal agent functionality. For details on the agent installation, see<br />
“OVO Agent Installation in <strong>Firewall</strong> Environments” on page 49.<br />
Figure 3-2 Firewall for HTTPS Nodes with an External Proxy<br />
[Diagram: the Server in svr.dom connects through the firewall to a Proxy in ext.dom, which forwards to Agents 1 to n. ❶ marks the proxy setting on the server, ❷ the setting on each agent:]<br />
❶ ovconfchg -ns bbc.http -set PROXY p.ext.dom:pport+(*.ext.dom)<br />
❷ ovconfchg -ns bbc.http -set PROXY p.ext.dom:pport+(s.svr.dom)<br />
Table 3-5 Filter Rules for Runtime of HTTPS Managed Nodes with an External Proxy<br />
Source | Destination | Protocol | Source Port | Destination Port<br />
MGMT SRV | Proxy | TCP | ANY, Configurable a | Proxy port<br />
Proxy | MGMT SRV | TCP | PROXY, dependent on software | 383<br />
a. Default allocated by system. Specific port can be configured.<br />
Proxies can be configured using the command:<br />
ovconfchg -ns bbc.http -set PROXY<br />
Table 3-6 specifies the filter rules for runtime of HTTPS managed nodes<br />
with an internal proxy.<br />
Figure 3-3 Firewall for HTTPS Nodes with an Internal Proxy<br />
[Diagram: the Server in svr.dom connects to a Proxy inside the firewall, which forwards through the firewall to Agents 1 to n in ext.dom. ❶ marks the proxy setting on the server, ❷ the setting on each agent:]<br />
❶ ovconfchg -ns bbc.http -set PROXY p.ext.dom:pport+(*.ext.dom)<br />
❷ ovconfchg -ns bbc.http -set PROXY p.ext.dom:pport+(s.svr.dom)<br />
Table 3-6 Filter Rules for Runtime of HTTPS Managed Nodes with an Internal Proxy<br />
Source | Destination | Protocol | Source Port | Destination Port<br />
Proxy | HTTPS NODE | TCP | PROXY, dependent on software | 383<br />
HTTPS NODE | Proxy | TCP | ANY, Configurable a | Proxy port<br />
a. Default allocated by system. Specific port can be configured.<br />
Figure 3-4 Firewall for HTTPS Nodes with Internal and External Proxies<br />
[Diagram: the Server in svr.dom connects to an internal Proxy, which crosses the firewall to an external Proxy, which forwards to Agents 1 to n in ext.dom. ❶ marks the proxy setting on the server, ❷ the setting on each agent:]<br />
❶ ovconfchg -ns bbc.http -set PROXY p.ext.dom:pport+(*.ext.dom)<br />
❷ ovconfchg -ns bbc.http -set PROXY p.ext.dom:pport+(s.svr.dom)<br />
Table 3-7 Filter Rules for Runtime of HTTPS Managed Nodes with Internal and External Proxies<br />
Source | Destination | Protocol | Source Port | Destination Port<br />
Proxy Internal | Proxy External | TCP | PROXY internal, dependent on software | PROXY external, dependent on software<br />
Proxy External | Proxy Internal | TCP | PROXY srv.domain, dependent on software | PROXY internal, dependent on software<br />
Configuring the OVO Management Server<br />
To configure the management server, complete the following steps:<br />
1. Configure Management Server (ovbbccb) Port<br />
Enter the comm<strong>and</strong>:<br />
ovconfchg -ovrg server -ns bbc.cb.ports \<br />
-set SERVER_PORT <port number><br />
2. Configure the Client Port Range<br />
Enter the following comm<strong>and</strong>s:<br />
ovconfchg -ns bbc.cb.ports \<br />
-set CLIENT_PORT <port range><br />
3. Restart the management server processes.<br />
a. ovstop ovctrl ovoacomm<br />
b. ovstart opc<br />
4. Optional: Improve network performance.<br />
Check if the network parameters for the system require tuning.<br />
Refer to “Network Tuning for <strong>HP</strong>-UX 11.x” on page 164 or to<br />
“Network Tuning for <strong>HP</strong>-UX 10.20” on page 163.<br />
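Taken together, the configuration steps above might look as follows in one session. The port values 383 and 14100-14199 are examples only, not recommendations.

```shell
# Example session for the steps above (illustrative values only).
ovconfchg -ovrg server -ns bbc.cb.ports -set SERVER_PORT 383
ovconfchg -ns bbc.cb.ports -set CLIENT_PORT 14100-14199
ovstop ovctrl ovoacomm
ovstart opc
```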
Configuring OVO Managed Nodes<br />
The communication type for each node has to be set in the OVO Node<br />
Bank on the management server. After distribution of the new<br />
configuration data, the agent processes have to be restarted manually.<br />
1. Configure the Server Port<br />
Enter the comm<strong>and</strong>:<br />
ovconfchg -ns bbc.cb.ports \<br />
-set SERVER_PORT <port number><br />
2. Configure the Client Port Range<br />
Enter the comm<strong>and</strong>:<br />
ovconfchg -ns bbc.cb.ports \<br />
-set CLIENT_PORT <port range><br />
3. Restart the Agent Processes on the Managed Node.<br />
Restart the agent processes for the new settings to take effect:<br />
opcagt -kill<br />
opcagt -start<br />
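On the managed node, the same sequence might look like this. The port values are illustrative assumptions only.

```shell
# Example session for configuring a managed node (illustrative values).
ovconfchg -ns bbc.cb.ports -set SERVER_PORT 383
ovconfchg -ns bbc.cb.ports -set CLIENT_PORT 14200-14209
opcagt -kill
opcagt -start
```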
Systems with Multiple IP Addresses<br />
If your environment includes systems with multiple network interfaces<br />
<strong>and</strong> IP addresses <strong>and</strong> you want to use a dedicated interface for the<br />
HTTP-based communication, set the following variables:<br />
❏ CLIENT_BIND_ADDR<br />
ovconfchg -ns bbc.http -set CLIENT_BIND_ADDR <IP address><br />
See “CLIENT_BIND_ADDR” on page 174 for more information.<br />
For the specific processes, such as for the OVO Message Agent, use<br />
the comm<strong>and</strong>:<br />
ovconfchg -ns bbc.http.ext.opcmsga -set \<br />
CLIENT_BIND_ADDR <IP address><br />
❏ SERVER_BIND_ADDR<br />
ovconfchg -ns bbc.http -set SERVER_BIND_ADDR <IP address><br />
See “SERVER_BIND_ADDR” on page 173 for more information.<br />
This command applies to the Communication Broker (ovbbccb) and<br />
all other HTTPS RPC servers visible on the network. Since, for OVO<br />
8.x, only the Communication Broker is normally visible on the<br />
network, all other RPC servers are connected through the<br />
Communication Broker and are not affected by the SERVER_BIND_ADDR<br />
setting.<br />
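For example, to bind both client and server communication to one dedicated interface. The address 10.136.120.193 is the example server address used elsewhere in this chapter, not a required value.

```shell
# Bind HTTP-based communication to one interface (example address).
ovconfchg -ns bbc.http -set CLIENT_BIND_ADDR 10.136.120.193
ovconfchg -ns bbc.http -set SERVER_BIND_ADDR 10.136.120.193
```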
HTTPS Agents and Network Address Translation<br />
Address Translation of Outside Addresses<br />
This is the basic scenario for NAT. Only the outside addresses are<br />
translated at the firewall. An example of the environment is shown in<br />
Figure 3-5.<br />
Figure 3-5 Firewall Using NAT<br />
[Diagram: inside, the Server bear.int.mycom.com (10.136.120.193), whose DB and DNS list lion.ext.mycom.com as 192.168.1.3; the NAT firewall translates 192.168.1.3 to 53.168.1.3; outside, the Agent lion.ext.mycom.com with OPC_IP_ADDRESS 192.168.1.3 and a DNS entry bear.int.mycom.com 10.136.120.193.]<br />
NOTE If SSH works through the firewall, agent installation using the OVO<br />
Administrator’s UI is possible. However, you must manually map the<br />
certificate request to the node and grant the request.<br />
Address Translation of Inside Addresses<br />
In this scenario, only the inside address (the management server) is<br />
translated at the firewall. An example of the environment is shown in<br />
Figure 3-6.<br />
Figure 3-6 Network Address Translation for an Address Inside the Firewall<br />
[Diagram: only the inside address is translated: 10.136.120.193 (bear.int.mycom.com) appears as 35.136.120.193 outside the firewall. The agent lion.ext.mycom.com (192.168.1.3, OPC_IP_ADDRESS 53.168.1.3) therefore resolves bear.int.mycom.com to 35.136.120.193, while the server’s DB and DNS list lion.ext.mycom.com as 192.168.1.3.]<br />
Address Translation of Inside <strong>and</strong> Outside Addresses<br />
This is the combination of the two previous scenarios. The inside <strong>and</strong> the<br />
outside network have a completely different set of IP addresses that get<br />
translated at the firewall. An example of the environment is shown in<br />
Figure 3-7.<br />
Figure 3-7 Network Address Translation for Addresses Inside and Outside the Firewall<br />
[Diagram: both sides are translated at the firewall: 192.168.1.3 to 53.168.1.3 and 10.136.120.193 to 35.136.120.193. The server’s DB and DNS list lion.ext.mycom.com as 192.168.1.3; the agent (OPC_IP_ADDRESS 53.168.1.3) resolves bear.int.mycom.com to 35.136.120.193.]<br />
IP Masquerading or Port Address Translation<br />
IP Masquerading or Port Address Translation (PAT) is a form of Network<br />
Address Translation that allows systems that do not have registered<br />
Internet IP addresses to have the ability to communicate to the Internet<br />
via the firewall system’s single Internet IP address. All outgoing traffic<br />
gets mapped to the single IP address which is registered at the Internet.<br />
This can be used to simplify network administration. The administrator<br />
of the internal network can choose reserved IP addresses (for example, in<br />
the 10.x.x.x range, or the 192.168.x.x range). These addresses are not<br />
registered on the Internet and can only be used internally. This also<br />
alleviates the shortage of IP addresses that ISPs often experience. A site<br />
with hundreds of computers can get by with a smaller number of<br />
registered Internet IP addresses, without denying any of its users<br />
Internet access.<br />
The disadvantage of this method is that protocols that open return<br />
connections fail: because multiple machines hide behind the single<br />
address, the firewall does not know where to route the incoming<br />
connections.<br />
It is necessary to use “port forwarding” to reach the agents from inside<br />
the firewall. The proxy setting must be made as follows:<br />
ovconfchg -ovrg server -ns bbc.http -set \<br />
PROXY “10.136.120.254:Pfw + (*.ext.mycom.com)”<br />
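On a Linux firewall, such a port-forwarding rule might be sketched as follows. The firewall address 10.136.120.254 and the proxy address 53.168.1.100 come from the example environment; the ports 9090 (standing in for the firewall port Pfw) and 8088 (the proxy port) are assumptions for illustration only.

```shell
# Sketch: forward a firewall port to the external proxy so that agents
# can be reached from inside (all port values are placeholders).
iptables -t nat -A PREROUTING -p tcp -d 10.136.120.254 --dport 9090 \
  -j DNAT --to-destination 53.168.1.100:8088
```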
An example of IP Masquerading is shown in Figure 3-8.<br />
Figure 3-8 IP Masquerading or Port Address Translation<br />
[Diagram: the Server bear.int.mycom.com (10.136.120.193) reaches the masquerading NAT firewall (10.136.120.254), which port-forwards to the Proxy puma.ext.mycom.com (53.168.1.100); the proxy forwards to the Agents lion.ext.mycom.com (53.168.1.3) and tiger.ext.mycom.com (53.168.1.4), each with a matching OPC_IP_ADDRESS. Pfw is the firewall port, Ppx the proxy port.]<br />
4 Configuring DCE Nodes<br />
This chapter describes how to set up and configure DCE managed nodes<br />
in a firewall environment. It describes the steps that need to be performed<br />
on the OVO management server and on the firewall to allow<br />
communication to an agent outside of the firewall.<br />
Management Server and Managed Node Port Settings<br />
For both the OVO management server and the OVO managed nodes, a set of<br />
ports must be defined. The following settings are used, as an example,<br />
within this chapter. The settings can be changed to reflect your<br />
environment.<br />
Table 4-1 specifies the management server communication ports.<br />
Table 4-1 Management Server Communication Port Settings<br />
Server Type Communication Type Port Range<br />
Communication <strong>Manager</strong> Socket Server 12003<br />
Display <strong>Manager</strong> RPC Server 12000<br />
Distribution <strong>Manager</strong> RPC Server 12002<br />
Forward <strong>Manager</strong> RPC Client 12004-12005<br />
Message Receiver RPC Server 12001<br />
NT Virtual Terminal Socket Server 12061<br />
Remote Agent Tool RPC Client 12041-12050<br />
Request Sender RPC Client 12006-12040<br />
TCP Socket Server Socket Server 12051-12060<br />
Table 4-2 specifies the managed node communication ports.<br />
Table 4-2 Managed Node Communication Port Settings<br />
Agent Type Communication Type Port Range<br />
Communication Agent Socket Server 13007<br />
Control Agent RPC Server 13001<br />
Distribution Agent RPC Client 13011-13013<br />
Embedded Performance Component Socket Server 13010<br />
Message Agent RPC Client 13004-13006<br />
NT Virtual Terminal Socket Client 13008-13009<br />
Table 4-3 specifies the console communication ports.<br />
Table 4-3 Console Communication Port Settings<br />
Agent Type Communication Type Port Range<br />
Reporter Socket Client 14000-14003<br />
Performance <strong>Manager</strong> Socket Client 14000-14003<br />
NOTE For details on the sizing of the RPC Client ranges on the OVO managed<br />
nodes <strong>and</strong> OVO management server, see “Port Usage on Managed Nodes”<br />
on page 191 <strong>and</strong> “Port Usage on the Management Server” on page 155.<br />
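When sizing these ranges, it helps to check how many ports a configured range actually contains. The following helper is a sketch only (not an OVO tool); it assumes POSIX shell arithmetic.

```shell
# count_ports: print how many ports a range written as "low-high"
# contains; a single port such as "12000" counts as 1.
count_ports() {
  range=$1
  low=${range%-*}     # text before the dash (or the whole value)
  high=${range#*-}    # text after the dash (or the whole value)
  echo $(( high - low + 1 ))
}

count_ports 12006-12040   # request sender range -> 35
count_ports 13004-13006   # message agent range  -> 3
```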
In the configuration examples listed in this document, DCE/TCP is used<br />
as the communication type.<br />
For further details on:<br />
❏ DCE/UDP<br />
See “DCE/UDP Communication Type” on page 82.<br />
❏ Other Communication Types<br />
See “NCS Communication Type” on page 83 for NCS usage <strong>and</strong> “Sun<br />
RPC Communication Type” on page 83 for Sun RPC usage.<br />
❏ Supported Communication Types for Each Agent Platform<br />
See the VPO Installation Guide for the Management Server.<br />
Configuring a <strong>Firewall</strong> for DCE Nodes<br />
For the runtime of the OVO agent, the firewall requires a specific range<br />
of communication ports to be opened. This allows the use of normal agent<br />
functionality. For details on the agent installation, see “OVO Agent<br />
Installation in <strong>Firewall</strong> Environments” on page 49.<br />
Table 4-4 specifies the filter rules for runtime of DCE managed nodes.<br />
Table 4-4 Filter Rules for Runtime of DCE Managed Nodes<br />
Source | Destination | Protocol | Source Port | Destination Port | Description<br />
MGMT SRV | DCE NODE | TCP | 12006-12040, 12041-12050 | 135 | Endpoint map<br />
DCE NODE | MGMT SRV | TCP | 13011-13013, 13004-13006 | 135 | Endpoint map<br />
MGMT SRV | DCE NODE | TCP | 12006-12040 | 13001, 13007 | Control agent, Communication agent<br />
MGMT SRV | DCE NODE | TCP | 12041-12050 | 13001 | Control agent<br />
DCE NODE | MGMT SRV | TCP | 13011-13013 | 12002 | Distribution manager<br />
DCE NODE | MGMT SRV | TCP | 13004-13006 | 12001, 12003 | Message receiver, Communication manager<br />
Configuring the OVO Management Server<br />
To configure the management server, you need to change the DCE client<br />
disconnect time. In addition, the port range has to be configured for each<br />
management server process.<br />
To configure the management server:<br />
1. Configure the DCE client disconnect time.<br />
Enter the following command:<br />
ovconfchg -ovrg server -ns opc -set \<br />
OPC_HPDCE_CLIENT_DISC_TIME 5<br />
See “OPC_HPDCE_CLIENT_DISC_TIME” on page 183 for further<br />
details.<br />
NOTE Setting the disconnect time to 5 seconds causes OVO’s management<br />
server processes to disconnect their connections quickly. This setting is<br />
recommended so that all the connections to different systems are<br />
disconnected cleanly. Keeping connections established for a lengthy<br />
period of time blocks ports, and there are only a few occasions when a<br />
connection could be re-used.<br />
2. Configure the port range for each management server<br />
process.<br />
Enter the following commands:<br />
ovconfchg -ovrg server -ns opc.opcdispm -set \<br />
OPC_COMM_PORT_RANGE 12000<br />
ovconfchg -ovrg server -ns opc.opcmsgrd -set \<br />
OPC_COMM_PORT_RANGE 12001<br />
ovconfchg -ovrg server -ns opc.opcdistm -set \<br />
OPC_COMM_PORT_RANGE 12002<br />
ovconfchg -ovrg server -ns opc.opccmm -set \<br />
OPC_COMM_PORT_RANGE 12003<br />
ovconfchg -ovrg server -ns opc.opcforwm -set \<br />
OPC_COMM_PORT_RANGE 12004-12005<br />
ovconfchg -ovrg server -ns opc.ovoareqsdr -set \<br />
OPC_COMM_PORT_RANGE 12006-12040<br />
ovconfchg -ovrg server -ns opc.opcragt -set \<br />
OPC_COMM_PORT_RANGE 12041-12050<br />
ovconfchg -ovrg server -ns opc.opctss -set \<br />
OPC_COMM_PORT_RANGE 12051-12060<br />
ovconfchg -ovrg server -ns opc.opcvterm -set \<br />
OPC_COMM_PORT_RANGE 12061<br />
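The nine commands above differ only in the namespace and the range, so they can be driven from a small table. A minimal sketch (a dry run: it only prints the ovconfchg calls; remove the leading echo to apply them on a management server):

```shell
#!/bin/sh
# Sketch: generate the per-process OPC_COMM_PORT_RANGE settings from a
# table of namespace/range pairs. Prints the commands instead of running
# them; drop the leading echo to apply on the management server.
apply_port_ranges() {
  while read ns range; do
    echo ovconfchg -ovrg server -ns "$ns" -set OPC_COMM_PORT_RANGE "$range"
  done <<EOF
opc.opcdispm 12000
opc.opcmsgrd 12001
opc.opcdistm 12002
opc.opccmm 12003
opc.opcforwm 12004-12005
opc.ovoareqsdr 12006-12040
opc.opcragt 12041-12050
opc.opctss 12051-12060
opc.opcvterm 12061
EOF
}
apply_port_ranges
```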
3. Restart the management server processes.<br />
a. ovstop ovctrl ovoacomm<br />
b. ovstart opc<br />
4. Optional: Improve network performance.<br />
Check if the network parameters for the system require tuning.<br />
Refer to “Network Tuning for HP-UX 11.x” on page 164 or to<br />
“Network Tuning for HP-UX 10.20” on page 163.<br />
Configuring OVO Managed Nodes<br />
The communication type for each node has to be set in the OVO Node<br />
Bank on the management server. After distribution of the new<br />
configuration data, the agent processes have to be restarted manually.<br />
1. Set the communication type for each managed node.<br />
a. In the OVO Node Bank, select the node that is located outside the<br />
firewall.<br />
b. Select Actions>Node>Modify… from the menu bar.<br />
c. Configure the heartbeat polling type for firewalls.<br />
Select RPC Only (for firewalls) as the Polling Type.<br />
d. Click on [Communication Options…] and select the following<br />
communication type settings in the Node Communication<br />
Options window: DCE RPC (TCP).<br />
e. Click [OK] in the Node Communication Options window and<br />
the Modify Node window. The new configuration will be<br />
distributed to the managed node.<br />
NOTE If the distribution of the new configuration type is not<br />
automatic—the firewall may restrict the communication—add the<br />
following line to the nodeinfo file of the affected node:<br />
OPC_COMM_TYPE RPC_DCE_TCP<br />
The nodeinfo file is located in the following directory on the node:<br />
AIX: /var/lpp/OV/conf/OpC/nodeinfo<br />
UNIX: /var/opt/OV/conf/OpC/nodeinfo<br />
Windows: :\usr\OV\conf\OpC\nodeinfo<br />
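The nodeinfo edit can be scripted. The sketch below appends the line only if no OPC_COMM_TYPE entry exists yet; the file name nodeinfo.example is a stand-in for the platform-specific path listed above:

```shell
#!/bin/sh
# Sketch: force DCE/TCP in nodeinfo if no communication type is set yet.
# nodeinfo.example stands in for the real platform path listed above.
NODEINFO=nodeinfo.example
touch "$NODEINFO"
grep -q '^OPC_COMM_TYPE' "$NODEINFO" ||
  echo 'OPC_COMM_TYPE RPC_DCE_TCP' >> "$NODEINFO"
```

The grep guard makes the append idempotent, so re-running the script does not duplicate the entry.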
2. Add the flag for RPC distribution.<br />
a. Edit the file opcinfo on the managed nodes. The opcinfo file is<br />
located in the following directories on the node:<br />
AIX: /usr/lpp/OV/OpC/install/opcinfo<br />
UNIX: /opt/OV/bin/OpC/install/opcinfo<br />
Windows: :\usr\OV\bin\OpC\install\opcinfo<br />
b. Add the following line at the end of the file:<br />
OPC_DIST_MODE DIST_RPC<br />
See “Configuration Distribution” on page 87 for more<br />
information.<br />
3. Configure the port range for each managed node process.<br />
a. Edit the file opcinfo on the managed nodes. The opcinfo file is<br />
located in the following directory on the node:<br />
AIX: /usr/lpp/OV/OpC/install/opcinfo<br />
UNIX: /opt/OV/bin/OpC/install/opcinfo<br />
Windows: :\usr\OV\bin\OpC\install\opcinfo<br />
b. Add the following lines at the end of the file:<br />
OPC_RESTRICT_TO_PROCS opcctla<br />
OPC_COMM_PORT_RANGE 13001<br />
OPC_RESTRICT_TO_PROCS opcdista<br />
OPC_COMM_PORT_RANGE 13011-13013<br />
OPC_RESTRICT_TO_PROCS opcmsga<br />
OPC_COMM_PORT_RANGE 13004-13006<br />
OPC_RESTRICT_TO_PROCS opccma<br />
OPC_COMM_PORT_RANGE 13007<br />
c. On Windows systems only, add the following additional lines:<br />
OPC_RESTRICT_TO_PROCS opcvterm<br />
OPC_COMM_PORT_RANGE 13008-13009<br />
OPC_MAX_PORT_RETRIES 70<br />
d. Ensure that there are no additional lines following the<br />
OPC_RESTRICT_TO_PROCS command in the opcinfo file.<br />
Subsequent command lines will only apply to the process<br />
specified in the last OPC_RESTRICT_TO_PROCS command line; in<br />
the example shown, opcvterm.<br />
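As a sketch, the lines from steps 2 and 3 can be appended in one operation; opcinfo.example stands in for the platform-specific opcinfo path listed in step 3a. The quoted here-document keeps the stanzas verbatim, and each OPC_RESTRICT_TO_PROCS line scopes everything that follows it:

```shell
#!/bin/sh
# Sketch: append the distribution flag and the per-process port stanzas.
# opcinfo.example stands in for the real opcinfo path from step 3a.
# Every OPC_RESTRICT_TO_PROCS line scopes the lines that follow it, so
# nothing else may be appended after the last stanza.
cat >> opcinfo.example <<'EOF'
OPC_DIST_MODE DIST_RPC
OPC_RESTRICT_TO_PROCS opcctla
OPC_COMM_PORT_RANGE 13001
OPC_RESTRICT_TO_PROCS opcdista
OPC_COMM_PORT_RANGE 13011-13013
OPC_RESTRICT_TO_PROCS opcmsga
OPC_COMM_PORT_RANGE 13004-13006
OPC_RESTRICT_TO_PROCS opccma
OPC_COMM_PORT_RANGE 13007
EOF
```

Note that OPC_DIST_MODE is written before the first OPC_RESTRICT_TO_PROCS line, so it stays global rather than scoped to one process.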
4. Restart the agent processes on the managed node.<br />
Restart the agent processes for the new settings to take effect:<br />
opcagt -kill<br />
opcagt -start<br />
5. Configure the embedded performance component.<br />
See the section “Embedded Performance Component” on page 88 for<br />
more information about configuring the embedded performance<br />
component and any additional tools such as HP OpenView Reporter<br />
and HP OpenView Performance Manager.<br />
NOTE The Allowed Port setting in the Administrator GUI (Node<br />
Bank>Actions>Server>Configure...>Allowed Port) can also be set, but<br />
the individual process settings will take precedence.<br />
Checking Communication Settings<br />
Verifying Communication Settings of the<br />
Management Server<br />
1. To confirm the server’s communication settings, execute the<br />
following command on the management server:<br />
rpccp show mapping<br />
2. A list of items similar to the following will be printed:<br />
ed0cd350-ecfd-11d2-9bd8-0060b0c41ede<br />
6d63f833-c0a0-0000-020f-887818000000,2.0<br />
ncacn_ip_tcp:10.136.120.193[12001]<br />
Message Receiver<br />
Verifying Communication Settings of Managed Nodes<br />
1. To confirm the agent’s communication settings, execute the following<br />
command on the managed node:<br />
rpccp show mapping<br />
2. A list of items similar to the following will be printed. The output for<br />
the Control Agent should show a registration for the given port<br />
range:<br />
nil<br />
9e0c0224-3654-0000-9a8d-08000949ab4c,2.0<br />
ncacn_ip_tcp:192.168.1.2[13001]<br />
Control Agent (COA)<br />
The Control Agent process should be registered to the port defined in<br />
Table 4-4 on page 72.<br />
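The registration check can be partly automated. The sketch below extracts the port from each ncacn_ip_tcp binding in rpccp-style output and flags ports outside the agent ranges configured earlier (13001, 13004-13007, 13011-13013); the input here is a hard-coded sample, and on a real node the output of rpccp show mapping would be piped in instead:

```shell
#!/bin/sh
# Sketch: flag ncacn_ip_tcp endpoints outside the configured agent port
# ranges (13001, 13004-13007, 13011-13013). Input below is sample text;
# on a real node, pipe `rpccp show mapping` output into check_ports.
check_ports() {
  sed -n 's/.*ncacn_ip_tcp:.*\[\([0-9][0-9]*\)\].*/\1/p' |
  while read port; do
    case $port in
      13001|1300[4-7]|1301[1-3]) ;;                   # inside a range
      *) echo "port $port outside configured range" ;;
    esac
  done
}
printf '%s\n' 'ncacn_ip_tcp:192.168.1.2[13001]' \
              'ncacn_ip_tcp:192.168.1.2[13999]' | check_ports
```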
Checking the Endpoint Map<br />
To check the endpoint map from a remote system, execute the following<br />
command:<br />
rpccp show mapping ncacn_ip_tcp:<br />
NOTE Checking the endpoint map from a remote system can be useful for<br />
systems on which the rpccp tool does not exist locally. A network<br />
connection to the remote system’s port 135 is necessary.<br />
Windows Managed Nodes<br />
The RPC implementation of MS Windows is only compatible with DCE<br />
and does not implement the full DCE functionality. It is, therefore, not<br />
possible to restrict outgoing communication for RPCs to a specific port<br />
range.<br />
Communicating with a Windows Managed Node<br />
Outside the Firewall<br />
To communicate with a Windows node outside the firewall, open the<br />
firewall according to the filter rules in Table 4-5 below:<br />
Table 4-5 Filter Rules for MS Windows Managed Nodes Runtime<br />
Source | Destination | Protocol | Source Port | Destination Port | Description<br />
MGMT SRV | NT NODE | TCP | 12006-12040, 12041-12050 | 135 | Endpoint map<br />
NT NODE | MGMT SRV | TCP | any | 135 | Endpoint map<br />
MGMT SRV | NT NODE | TCP | 12006-12040 | 13001, 13007 | Control Agent, Communication Agent<br />
MGMT SRV | NT NODE | TCP | 12041-12050 | 13001 | Control Agent<br />
NT NODE | MGMT SRV | TCP | any | 12001, 12002, 12003 | Message Receiver, Distribution Manager, Communication Manager<br />
NT NODE | MGMT SRV | TCP | 13008-13009 | 12061 | NT Virtual Terminal<br />
NOTE Opening up the firewall like this does not cause a security issue because<br />
only the management server’s RPC servers (Distribution Manager,<br />
Message Receiver and Communication Manager) can be accessed from<br />
the outside. The difference from a real DCE node is that a connection<br />
request is allowed from any possible source port.<br />
Communication Types<br />
DCE/UDP Communication Type<br />
DCE/UDP cannot be completely restricted to a port range. Since all<br />
platforms where DCE is available also offer DCE/TCP, it is recommended<br />
that DCE/TCP be used.<br />
If there is a need to use DCE/UDP, the DCE daemon (rpcd/dced) can be<br />
forced to use a specific port range only. This is done by setting the<br />
RPC_RESTRICTED_PORTS variable before starting the daemon, in addition<br />
to the setting for the server or agent processes.<br />
NOTE Restricting the DCE daemon’s port range will have an effect on all<br />
applications that use RPC communications on that system. They all will<br />
share the same port range.<br />
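A sketch of how the variable might be set before the daemon starts. Both the protocol-sequence syntax shown and the range itself are assumptions here and should be verified against your DCE platform documentation:

```shell
#!/bin/sh
# Sketch: restrict the DCE daemon to a fixed port window before starting
# it. The protocol-sequence syntax and the range shown are assumptions;
# verify both against your DCE platform documentation.
RPC_RESTRICTED_PORTS='ncacn_ip_tcp[13000-13020]:ncadg_ip_udp[13000-13020]'
export RPC_RESTRICTED_PORTS
# ...then start rpcd/dced from this environment so it inherits the value.
echo "RPC_RESTRICTED_PORTS=$RPC_RESTRICTED_PORTS"
```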
NCS Communication Type<br />
Since NCS uses additional ports to answer connection requests, the<br />
firewall has to be opened further for NCS nodes. Table 4-6 specifies the<br />
filter rules that must be followed.<br />
Table 4-6 Filter Rules for NCS Node Ports<br />
Source | Destination | Protocol | Source Port | Destination Port | Description<br />
MGMT SRV | NCS NODE | UDP | 12006-12040, 12041-12050 | 135 | Endpoint map<br />
NCS NODE | MGMT SRV | UDP | any | 135 | Endpoint map<br />
MGMT SRV | NCS NODE | UDP | 12006-12040, 12041-12050 | any | Control Agent, Communication Agent<br />
NCS NODE | MGMT SRV | UDP | any | 12001, 12002, 12003 | Message Receiver, Distribution Manager, Communication Manager<br />
See “Configuration Distribution” on page 87 for notes on the distribution<br />
mechanism.<br />
Sun RPC Communication Type<br />
For Novell NetWare managed nodes, the communication type Sun RPC is<br />
used. Since no port restriction is possible with Sun RPC, the firewall will<br />
need to be opened up completely for communication between the<br />
managed node and the management server. The communication type<br />
TCP or UDP can be selected in the OVO Node Bank. For Sun RPC, the<br />
endpoint mapper is located on port 111. If UDP is selected, see<br />
“Configuration Distribution” on page 87.<br />
NOTE It is not recommended to use Novell NetWare nodes in a firewall<br />
environment.<br />
MC/ServiceGuard in Firewall Environments<br />
In an MC/ServiceGuard environment, communication can use any of the<br />
available IP addresses of the cluster, so the firewall has to be opened<br />
further.<br />
Table 4-7 specifies the filter rules that must be applied for DCE nodes.<br />
Table 4-7 Filter Rules for DCE Nodes and Management Server on<br />
MC/ServiceGuard<br />
Source | Destination | Protocol | Source Port | Destination Port | Description<br />
PACKAGE IP, PHYS IP NODE 1, PHYS IP NODE 2 | DCE NODE | TCP | 12006-12040, 12041-12050 | 135 | Endpoint map<br />
DCE NODE | PACKAGE IP | TCP | 13011-13013, 13004-13006 | 135 | Endpoint map<br />
PACKAGE IP, PHYS IP NODE 1, PHYS IP NODE 2 | DCE NODE | TCP | 12006-12040 | 13001, 13007 | Control Agent, Communication Agent<br />
PACKAGE IP, PHYS IP NODE 1, PHYS IP NODE 2 | DCE NODE | TCP | 12041-12050 | 13001 | Control Agent<br />
DCE NODE | PACKAGE IP | TCP | 13011-13013 | 12002 | Distribution Manager<br />
DCE NODE | PACKAGE IP | TCP | 13004-13006 | 12001, 12003 | Message Receiver, Communication Manager<br />
For NCS nodes, the following special rules have to be applied, as<br />
displayed in Table 4-8:<br />
Table 4-8 Filter Rules for NCS Nodes and Management Server on<br />
MC/ServiceGuard<br />
Source | Destination | Protocol | Source Port | Destination Port | Description<br />
PACKAGE IP, PHYS IP NODE 1, PHYS IP NODE 2 | NCS NODE | UDP | 12006-12040, 12041-12050 | 135 | Endpoint map<br />
NCS NODE | PACKAGE IP | UDP | any | 135 | Endpoint map<br />
PACKAGE IP, PHYS IP NODE 1, PHYS IP NODE 2 | NCS NODE | UDP | 12006-12040, 12041-12050 | any | Control Agent, Communication Agent<br />
NCS NODE | PACKAGE IP | UDP | any | 12001, 12002, 12003 | Message Receiver, Distribution Manager, Communication Manager<br />
NOTE If there are additional cluster nodes or physical IP addresses, then the<br />
firewall must be opened for these as well.<br />
Configuration Distribution<br />
The OVO configuration distribution by default will use a TCP socket<br />
connection to send the actual data. This causes an additional TCP<br />
connection to be opened from the agent to the management server. Since<br />
this is not an RPC connection, it does not honor the setting of the<br />
RPC_RESTRICTED_PORTS environment variable.<br />
By setting the flag DIST_RPC in the opcinfo file on the managed nodes,<br />
the distribution data will be sent in an RPC call.<br />
Distributing the <strong>Configuration</strong> in an RPC Call<br />
1. Locate the opcinfo file on the managed nodes. The opcinfo file is<br />
located in the following directory on the managed node:<br />
AIX: /usr/lpp/OV/OpC/install/opcinfo<br />
UNIX: /opt/OV/bin/OpC/install/opcinfo<br />
Windows: :\usr\OV\bin\OpC\install\opcinfo<br />
2. Add the following line to the opcinfo file:<br />
OPC_DIST_MODE DIST_RPC<br />
This will result in distribution data being sent in an RPC call.<br />
NOTE This might cause additional traffic in bad or slow network environments<br />
when UDP is used (NCS or DCE/UDP is configured as communication<br />
type).<br />
Embedded Performance Component<br />
Performance metrics are collected by the embedded performance<br />
component that is part of the OVO agents. The embedded performance<br />
component collects performance counter and instance data from the<br />
operating system.<br />
The collected values are stored in a proprietary persistent data store<br />
from which they are retrieved and transformed into presentation values.<br />
The presentation values can be used by extraction, visualization, and<br />
analysis tools such as HP OpenView Reporter and HP OpenView<br />
Performance Manager.<br />
Figure 4-1 shows the communication with the embedded performance<br />
component through a firewall. The embedded performance component<br />
serves as HTTP server. Reporter and Performance Manager are HTTP<br />
clients.<br />
If an HTTP proxy is used, Reporter and Performance Manager<br />
communicate with the embedded performance component via the proxy.<br />
Figure 4-1 Communication with the Embedded Performance Component<br />
[Diagram: HP OpenView Reporter and HP OpenView Performance Manager (HTTP clients, inside the firewall) communicate, optionally through a proxy, across the firewall with the embedded performance component (HTTP server) on the agent outside.]<br />
Configuring Ports for the Embedded Performance<br />
Component<br />
Reporter and Performance Manager communicate with the embedded<br />
performance component via a protocol based on HTTP. To access data<br />
collected by the embedded performance component, ports for the HTTP<br />
server (embedded performance component) and the HTTP clients<br />
(Reporter and/or Performance Manager) need to be opened.<br />
There are two ways to configure HTTP clients in a firewall environment:<br />
❏ With HTTP proxy<br />
The recommended way is to use HTTP proxies when communicating<br />
through a firewall. This simplifies the configuration because proxies<br />
are often in use anyway, and the firewall has to be opened only for<br />
the proxy system and for a smaller number of ports.<br />
See Table 4-9, “Filter Rules for the Embedded Performance<br />
Component (With HTTP Proxy),” on page 90 for a list of default<br />
ports.<br />
❏ Without HTTP proxy<br />
If HTTP proxies are not available, additional ports have to be opened<br />
and additional configuration settings are required.<br />
See Table 4-10, “Filter Rules for the Embedded Performance<br />
Component (Without HTTP Proxy),” on page 90 for a list of default<br />
ports.<br />
NOTE The following sections require changing the configuration of the<br />
namespace opc.<br />
Table 4-9 specifies filter rules for the embedded performance component<br />
(with HTTP proxy).<br />
Table 4-9 Filter Rules for the Embedded Performance Component<br />
(With HTTP Proxy)<br />
Source | Destination | Protocol | Source Port | Destination Port | Description<br />
PROXY | MGD NODE | HTTP | Defined by the proxy | 383 | Local Location Broker<br />
PROXY | MGD NODE | HTTP | Defined by the proxy | 381 | Embedded Performance Component<br />
Table 4-10 specifies filter rules for the embedded performance component<br />
(without HTTP proxy).<br />
Table 4-10 Filter Rules for the Embedded Performance Component<br />
(Without HTTP Proxy)<br />
Source | Destination | Protocol | Source Port | Destination Port | Description<br />
REPORTER | MGD NODE | HTTP | 14000-14003 | 383 | Reporter -> Local Location Broker request<br />
REPORTER | MGD NODE | HTTP | 14000-14003 | 381 | Embedded Performance Component<br />
PERFORMANCE MANAGER | MGD NODE | HTTP | 14000-14003 | 383 | Performance Manager -> Local Location Broker request<br />
PERFORMANCE MANAGER | MGD NODE | HTTP | 14000-14003 | 381 | Embedded Performance Component<br />
Configuring the Embedded Performance Component<br />
Use the nodeinfo parameter SERVER_PORT to set the ports used by the<br />
HTTP server (the embedded performance component):<br />
1. On the managed node where the embedded performance component<br />
is running, locate the nodeinfo file.<br />
2. Add the following line to the nodeinfo file:<br />
SERVER_PORT(com.hp.openview.Coda) <port><br />
Where <port> is the number of the port you want to use, for<br />
example 13010.<br />
3. Restart the embedded performance process (coda):<br />
opcagt -stop -id 12<br />
opcagt -start -id 12<br />
Configuring Reporter and/or Performance Manager<br />
There are two ways to configure the HTTP clients in a firewall<br />
environment:<br />
❏ With HTTP Proxy<br />
This is the recommended way. See the section “Configuring<br />
Reporter/Performance Manager With HTTP Proxy” on page 93.<br />
❏ Without HTTP Proxy<br />
See the section “Configuring Reporter/Performance Manager<br />
Without HTTP Proxy” on page 94.<br />
If OVO agents are running on the system where Reporter and/or<br />
Performance Manager are installed, you have to use the nodeinfo file for<br />
your configuration. If no OVO agents are installed, you have to edit the<br />
communication configuration file default.txt.<br />
See Table 4-11 on page 92 for the location of the default.txt file on<br />
different platforms.<br />
Table 4-11 Location of the default.txt File<br />
Platform | Location<br />
AIX | /var/lpp/OV/conf/BBC/default.txt<br />
UNIX | /var/opt/OV/conf/BBC/default.txt<br />
Windows | :\usr\OV\conf\BBC\default.txt<br />
Configuring Reporter/Performance Manager With<br />
HTTP Proxy<br />
When an HTTP proxy is used, Reporter and/or Performance Manager<br />
have to be configured with the proxy to be used to contact the embedded<br />
performance component.<br />
To configure Reporter and/or Performance Manager with HTTP proxies,<br />
do one of the following:<br />
❏ OVO Agents are Installed<br />
If OVO agents are installed on the system where Reporter and/or<br />
Performance Manager are running, add the variable PROXY to the<br />
nodeinfo file, for example:<br />
PROXY web-proxy:8088-(*.hp.com)+(*.mycom.com)<br />
In this example, the proxy web-proxy is used on port 8088 for every<br />
server (*) except hosts that match *.hp.com, for example,<br />
www.hp.com. Hosts that match *.mycom.com are excluded from this<br />
exception, so for lion.mycom.com the proxy server is used again.<br />
See also “PROXY” on page 187.<br />
❏ OVO Agents are not Installed<br />
If no OVO agents are installed on the system where Reporter and/or<br />
Performance Manager are running, edit the communication<br />
configuration file default.txt. See Table 4-11 on page 92 for the<br />
location of the default.txt file.<br />
In the [DEFAULT] section of the default.txt file, locate the lines<br />
that relate to PROXY and set the PROXY parameter as described in the<br />
[DEFAULT] section.<br />
NOTE Any settings defined in the nodeinfo file will take precedence over<br />
the settings defined in the default.txt file.<br />
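The include/exclude logic of the PROXY value above can be pictured as a small decision function. The sketch below is an interpretation of the example for these two fixed patterns only, not the agent's actual parser:

```shell
#!/bin/sh
# Sketch: the decision implied by
#   PROXY web-proxy:8088-(*.hp.com)+(*.mycom.com)
# for a given hostname. Interpretation of the example only, not the
# agent's real parser.
uses_proxy() {
  case $1 in
    *.mycom.com) echo yes ;;  # "+" pattern overrides the exclusion
    *.hp.com)    echo no  ;;  # "-" pattern: contact directly
    *)           echo yes ;;  # default: use web-proxy:8088
  esac
}
uses_proxy lion.mycom.com   # yes
uses_proxy www.hp.com       # no
```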
Configuring Reporter/Performance Manager Without<br />
HTTP Proxy<br />
NOTE If HP OpenView Reporter and HP OpenView Performance Manager are<br />
installed on the same system and both access the embedded performance<br />
component in parallel, specify a port range for CLIENT_PORT as described<br />
in this section. If they are running on different systems, you can instead<br />
specify a single port for each.<br />
To configure Reporter and Performance Manager without HTTP proxies:<br />
❏ OVO Agents are Installed<br />
If OVO agents are installed on the system where Reporter and<br />
Performance Manager are running, set the client ports in the<br />
nodeinfo file, for example:<br />
CLIENT_PORT(com.hp.openview.CodaClient) = <port range><br />
Where <port range> is the range of ports you want to use, for<br />
example 14000-14003.<br />
❏ OVO Agents are not Installed<br />
If no OVO agents are installed on the system where Reporter and<br />
Performance Manager are running, edit the communication<br />
configuration file default.txt. See Table 4-11 on page 92 for the<br />
location of the default.txt file.<br />
In the default.txt file, locate the line<br />
[hp.com.openview.CodaClient] and specify the port range for the<br />
variable CLIENT_PORT right below this line. For example:<br />
[hp.com.openview.CodaClient]<br />
CLIENT_PORT = <port range><br />
Where <port range> is the range of ports you want to use, for<br />
example 14000-14003.<br />
NOTE Any settings defined in the nodeinfo file will take precedence over<br />
the settings defined in the default.txt file.<br />
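For the no-agent case, the default.txt edit can be sketched as a simple append; default.txt.example stands in for the platform path from Table 4-11, and the 14000-14003 range matches the filter rules in Table 4-10:

```shell
#!/bin/sh
# Sketch: give the HTTP clients a fixed source-port window in default.txt
# when no OVO agent is installed. default.txt.example stands in for the
# real path from Table 4-11.
cat >> default.txt.example <<'EOF'
[hp.com.openview.CodaClient]
CLIENT_PORT = 14000-14003
EOF
```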
Changing the Default Port of the Local Location<br />
Broker<br />
The default port of the Local Location Broker (LLB) is 383. If you decide<br />
to change this default value, the same value must be used on all systems,<br />
that is, the LLB SERVER_PORT variable must be set for the embedded<br />
performance component on all managed nodes as well as for Reporter<br />
and Performance Manager.<br />
To configure the LLB port, add the following variable to the nodeinfo (or<br />
default.txt) file:<br />
SERVER_PORT(com.hp.openview.bbc.LLBServer) <port><br />
Where <port> is the number of the port you want to use. This<br />
number must be the same on all systems.<br />
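Because the LLB port must match everywhere, a consistency check across collected configuration files can help. A sketch, with illustrative file names and an illustrative port number:

```shell
#!/bin/sh
# Sketch: verify that every collected nodeinfo/default.txt file names the
# same LLB port. File names and port below are illustrative.
check_llb_port() {  # usage: check_llb_port <expected-port> <file>...
  expected=$1; shift
  for f in "$@"; do
    port=$(sed -n 's/.*LLBServer) *\([0-9][0-9]*\).*/\1/p' "$f")
    if [ "$port" != "$expected" ]; then
      echo "$f: LLB port '$port' does not match '$expected'"
      return 1
    fi
  done
}
printf 'SERVER_PORT(com.hp.openview.bbc.LLBServer) 9383\n' > node_a.conf
printf 'SERVER_PORT(com.hp.openview.bbc.LLBServer) 9383\n' > node_b.conf
check_llb_port 9383 node_a.conf node_b.conf && echo "LLB port consistent"
```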
Systems with Multiple IP Addresses<br />
If your environment includes systems with multiple network interfaces<br />
and IP addresses and you want to use a dedicated interface for the<br />
HTTP-based communication, set the following variables in the<br />
appropriate nodeinfo file (or default.txt file):<br />
❏ CLIENT_BIND_ADDR (for Reporter and/or Performance Manager)<br />
See “CLIENT_BIND_ADDR(<app_name>)” on page 186 for more<br />
information.<br />
❏ SERVER_BIND_ADDR (for the embedded performance component)<br />
See “SERVER_BIND_ADDR(<app_name>)” on page 188 for more<br />
information.<br />
Systems Installed with OpenView Operations for<br />
Windows<br />
If your managed nodes have an agent installed from the OpenView<br />
Operations for Windows management server, the location of the<br />
nodeinfo and default.txt files will be different, as shown in Table 4-12.<br />
The variables/registry entries OvAgentInstallDir and OvDataDir<br />
determine the location.<br />
Table 4-12 Systems Installed with OpenView Operations for Windows<br />
Platform | Filename | Default Location<br />
AIX | nodeinfo | /conf/OpC/nodeinfo<br />
AIX | default.txt | /conf/BBC/default.txt<br />
UNIX | nodeinfo | /conf/OpC/nodeinfo<br />
UNIX | default.txt | /conf/BBC/default.txt<br />
Windows | nodeinfo | Installed from Windows server: \conf\OpC\nodeinfo (usually \HP OpenView\Installed Packages\{790C06B4-844E-11D2-972B-080009EF8C2A}\conf\OpC\nodeinfo); installed from UNIX server: :\usr\OV\conf\OpC\nodeinfo<br />
Windows | default.txt | Installed from Windows server: \conf\BBC\default.txt (usually \HP OpenView\Installed Packages\{790C06B4-844E-11D2-972B-080009EF8C2A}\conf\BBC\default.txt); installed from UNIX server: :\usr\OV\conf\BBC\default.txt<br />
Checkpoint Firewall-1 4.1 Integration<br />
Service Pack 4 for Checkpoint Firewall-1 4.1 introduces predefined<br />
services for RPC content filtering of OVO.<br />
Content Filtering<br />
With RPC content filtering, there is no need to open any ports (specific<br />
ones or ranges) over the firewall. Instead, the firewall checks the content<br />
of the connection request information. Since OVO uses RPC for<br />
communication, the RPC application interfaces for the RPC services in<br />
use can be specified.<br />
A specific RPC interface is qualified by a known UUID. When an RPC<br />
client wants to establish a connection with an RPC server, it sends a<br />
request to the RPC daemon containing the UUID. The RPC daemon<br />
looks up the endpoint map <strong>and</strong> responds with the port number assigned<br />
to the requested interface.<br />
Checkpoint Firewall-1 compares the requested RPC UUID to a service<br />
rule. If the UUID matches the rule, the connection is allowed to pass the<br />
firewall. That way only specific, known and allowed RPC calls can pass<br />
the firewall.<br />
No port restriction is involved; instead, the firewall relies only on the<br />
content of the RPC connection request.<br />
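The pass/drop decision described above reduces to an allow-list check on the requested interface UUID. A sketch of that logic; the UUID shown is a placeholder reused from the rpccp example earlier in this chapter, not one of the firewall's predefined OVO interface IDs:

```shell
#!/bin/sh
# Sketch: the content-filtering decision reduced to its logic -- pass a
# connection request only if its interface UUID is on the allow list.
# The UUID below is a placeholder taken from the rpccp example, not one
# of Checkpoint's predefined OVO interface IDs.
ALLOWED_UUIDS='ed0cd350-ecfd-11d2-9bd8-0060b0c41ede'
request_allowed() {
  for u in $ALLOWED_UUIDS; do
    [ "$1" = "$u" ] && return 0
  done
  return 1
}
request_allowed ed0cd350-ecfd-11d2-9bd8-0060b0c41ede && echo pass
request_allowed 00000000-0000-0000-0000-000000000000 || echo drop
```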
Content Filtering for OVO<br />
Checkpoint Firewall-1 supports the following OVO services, which can be<br />
chosen from the Services window in the Policy Editor (predefined<br />
services), as displayed in Table 4-13.<br />
Table 4-13 Checkpoint Firewall-1 Services for OVO<br />
Service | Description<br />
HP-OpCdistm | Agent to Distribution Manager for configuration data.<br />
HP-OpCmsgrd-std | Agent to Message Receiver for sending messages.<br />
HP-OpCmsgrd-coa | Agent to Message Receiver for bulk transfers.<br />
HP-OpCctla | Server to agent for standard requests.<br />
HP-OpCctla-cfgpush | Server to agent for configuration push.<br />
HP-OpCctla-bulk | Server to agent for bulk transfers.<br />
HP-OpCmsgrd-m2m | Server to server for message forwarding.<br />
To use the content filtering in Checkpoint Firewall-1, the following<br />
service has to be allowed in addition to all application-specific services,<br />
as displayed in Table 4-14.<br />
Table 4-14 Filter Rules for Content Filtering of Agent/Server<br />
Communication<br />
Source | Destination | Service<br />
AGENT | SERVER | DCE-RPC<br />
SERVER | AGENT | DCE-RPC<br />
To use these services, the agent must be configured for RPC distribution<br />
(see “Configuration Distribution” on page 87) and for non-ICMP operation<br />
(see “ICMP (DCE Agents Only)” on page 48). The firewall configuration is<br />
similar to what is represented in Table 4-15 on page 99.<br />
Table 4-15 Filter Rules for Content Filtering of Agent/Server<br />
Communication<br />
Source | Destination | Service<br />
AGENT | SERVER | HP-OpCdistm<br />
AGENT | SERVER | HP-OpCmsgrd-std<br />
AGENT | SERVER | HP-OpCmsgrd-coa<br />
SERVER | AGENT | HP-OpCctla<br />
SERVER | AGENT | HP-OpCctla-cfgpush<br />
SERVER | AGENT | HP-OpCctla-bulk<br />
For message forwarding, the firewall configuration requires the following<br />
as displayed in Table 4-16.<br />
Table 4-16 Filter Rules for Content Filtering of Server/Server<br />
Communication<br />
Source Destination Service<br />
SERVER 1 SERVER 2 <strong>HP</strong>-OpCmsgrd-m2m<br />
SERVER 2 SERVER 1 <strong>HP</strong>-OpCmsgrd-m2m<br />
Combining Content Filtering <strong>and</strong> Port Restrictions<br />
To extend security even further, it is possible to add port restriction to<br />
the predefined OVO services for content filtering. To do so, edit the Match<br />
field in <strong>Firewall</strong>-1’s Services Properties for the OVO services. Using<br />
the INSPECT language, the port restrictions can be added to the UUID<br />
check.<br />
The defined ports need to be specified on both the server <strong>and</strong> agent as<br />
documented in “Configuring the OVO Management Server” on page 73<br />
<strong>and</strong> “Configuring OVO Managed Nodes” on page 75.<br />
The new service rule could look like:<br />
((dport = port) or dport=DCERPC_PORT),<br />
dcerpc_uuid_ex (IID1, IID2, IID3, IID4)<br />
Port ranges can be specified:<br />
((dport >= lo_port) and (dport <= hi_port) or dport=DCERPC_PORT),<br />
dcerpc_uuid_ex (IID1, IID2, IID3, IID4)<br />
Sets of ports can be specified:<br />
myports = {port1, port2, ..., portn};<br />
(dport in myports or dport=DCERPC_PORT),<br />
dcerpc_uuid_ex (IID1, IID2, IID3, IID4)<br />
In addition to the content filtering rules for the RPC calls, the following<br />
firewall rules and agent/server configuration must be applied:<br />
1. On the managed node system, add to the end of the opcinfo file:<br />
OPC_RESTRICT_TO_PROCS opccma<br />
OPC_COMM_PORT_RANGE 13007<br />
2. On the management server system, enter the following command:<br />
ovconfchg -ovrg server -ns opc.ovoareqsdr -set \<br />
CLIENT_PORT 12006-12040<br />
3. On the firewall, add the following rule:<br />
MGMT_SRV -> NODE TCP source 12006-12040 -> dest 13007<br />
DCE Agents <strong>and</strong> Network Address Translation<br />
Address Translation of Outside Addresses<br />
This is the basic scenario for NAT. Only the outside addresses are<br />
translated at the firewall. An example of the environment is shown in<br />
Figure 4-2.<br />
Figure 4-2 Firewall Using NAT<br />
[Figure: The management server bear.mycom.com (10.136.120.193) and its<br />
database are inside the NAT firewall; the agent lion.mycom.com is outside.<br />
The server and its database know the agent as 192.168.1.3; the firewall<br />
translates 192.168.1.3 to the agent’s outside address 53.168.1.3. DNS<br />
resolves bear.mycom.com to 10.136.120.193 and lion.mycom.com to<br />
192.168.1.3. On the agent, the nodeinfo file contains OPC_IP_ADDRESS<br />
192.168.1.3, and the opcinfo file contains OPC_MGMT_SERVER<br />
bear.mycom.com and OPC_AGENT_NAT TRUE.]<br />
NOTE The following manual step is required to allow outside addresses in the<br />
NAT environment:<br />
A flag must be added to the opcinfo file to make the agent correctly<br />
handle the NAT environment. See “Configuring the Agent for the NAT<br />
Environment” on page 105.<br />
Address Translation of Inside Addresses<br />
In this scenario, only the inside address (that of the management server) is<br />
translated at the firewall. An example of the environment is shown in<br />
Figure 4-3.<br />
Figure 4-3 Network Address Translation for an Address Inside the Firewall<br />
[Figure: The management server bear.mycom.com (10.136.120.193) and its<br />
database are inside the firewall; the agent lion.mycom.com (192.168.1.3) is<br />
outside. The firewall translates the server’s inside address 10.136.120.193<br />
to the outside address 35.136.120.193. DNS inside the firewall resolves<br />
bear.mycom.com to 10.136.120.193; DNS outside resolves it to<br />
35.136.120.193. On the agent, nodeinfo contains OPC_IP_ADDRESS<br />
192.168.1.3, opcinfo contains OPC_MGMT_SERVER bear.mycom.com, and the<br />
responsible managers file (respmgr) contains ACTIONALLOWMANAGER<br />
10.136.120.193.]<br />
NOTE A manual step is required to make this work:<br />
A responsible managers file must be created on the OVO management<br />
server and distributed to the agent. See “Setting Up the<br />
Responsible Managers File” on page 105.<br />
Address Translation of Inside <strong>and</strong> Outside Addresses<br />
This is the combination of the two previous scenarios. The inside <strong>and</strong> the<br />
outside network have a completely different set of IP addresses that get<br />
translated at the firewall. An example of the environment is shown in<br />
Figure 4-4.<br />
Figure 4-4 Network Address Translation for Addresses Inside and Outside<br />
the Firewall<br />
[Figure: The management server bear.mycom.com (10.136.120.193) and its<br />
database are inside the firewall; the agent lion.mycom.com is outside. The<br />
firewall translates both addresses: the agent’s 192.168.1.3 to the outside<br />
address 53.168.1.3, and the server’s 10.136.120.193 to the outside address<br />
35.136.120.193. DNS inside the firewall resolves bear.mycom.com to<br />
10.136.120.193; DNS outside resolves it to 35.136.120.193. On the agent,<br />
nodeinfo contains OPC_IP_ADDRESS 192.168.1.3, opcinfo contains<br />
OPC_MGMT_SERVER bear.mycom.com and OPC_AGENT_NAT TRUE, and the<br />
responsible managers file (respmgr) contains ACTIONALLOWMANAGER<br />
10.136.120.193.]<br />
The following manual steps are required:<br />
1. A responsible managers file must be created on the OVO<br />
management server and distributed to the agent. See<br />
“Setting Up the Responsible Managers File” on page 105.<br />
2. A flag must be added to the opcinfo file to make the agent handle<br />
the NAT environment correctly. See “Configuring the Agent for the<br />
NAT Environment” on page 105.<br />
Configuring the Agent for the NAT Environment<br />
After the installation of the agent software, the agent must be<br />
configured to handle the NAT environment correctly. The following line<br />
must be added to the opcinfo file of the affected agent:<br />
OPC_AGENT_NAT TRUE<br />
The opcinfo file is located in the following location on the managed<br />
node:<br />
AIX: /usr/lpp/OV/OpC/install/opcinfo<br />
Other UNIX: /opt/OV/bin/OpC/install/opcinfo<br />
Windows: <drive>:\usr\OV\bin\OpC\install\opcinfo<br />
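For example, the flag can be added idempotently with a small shell helper. This is a sketch only (the helper name is not an HP tool); adjust the opcinfo path for your platform as listed above.

```shell
# Append OPC_AGENT_NAT TRUE to an opcinfo file unless it is already set.
# Sketch only -- the file path varies by platform (see the list above).
add_opc_agent_nat() {  # usage: add_opc_agent_nat /path/to/opcinfo
  grep -q '^OPC_AGENT_NAT' "$1" || echo 'OPC_AGENT_NAT TRUE' >> "$1"
}

# Typical call on most UNIX managed nodes:
# add_opc_agent_nat /opt/OV/bin/OpC/install/opcinfo
```

Running the helper twice leaves only one OPC_AGENT_NAT entry in the file.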
Setting Up the Responsible <strong>Manager</strong>s File<br />
When the OVO agent receives an action request (application,<br />
operator-initiated, or remote automatic action), it checks whether the<br />
originating OVO management server process is authorized to send action<br />
requests. This check uses the IP address that is stored in the action<br />
request. Since the NAT firewall cannot change the IP address inside a<br />
data structure, the agent refuses to execute the action.<br />
To solve this issue, a responsible managers file can be set up to authorize<br />
the management server’s actual IP address to execute actions.<br />
The configuration is located at:<br />
/etc/opt/OV/share/conf/OpC/mgmt_sv/respmgrs/c0a80103<br />
where c0a80103 is the hex representation of the agent’s IP address. In<br />
Figure 4-4 on page 104, this is the hex address of the agent lion.mycom.com<br />
(192.168.1.3). The opc_ip_addr tool can be used to convert IP addresses<br />
between hex and dotted representation. Alternatively, the allnodes file can<br />
be used if the same responsible managers file should apply to all OVO agents.<br />
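As a quick illustration of the file-name convention, a dotted address can be converted to the hex name with standard shell tools. This is a sketch only; the supported way is the opc_ip_addr tool mentioned above.

```shell
# Convert a dotted IP address to the hex file name expected under
# .../respmgrs/ (sketch only; opc_ip_addr is the supported tool).
ip_to_hexname() {  # usage: ip_to_hexname A.B.C.D
  printf '%02x%02x%02x%02x\n' $(echo "$1" | tr '.' ' ')
}

ip_to_hexname 192.168.1.3   # prints c0a80103
```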
The file must contain the following lines:<br />
#<br />
# Responsible Manager Configurations for a NAT Management Server<br />
#<br />
RESPMGRCONFIGS<br />
  RESPMGRCONFIG<br />
    DESCRIPTION "Configuration for a NAT Management Server"<br />
    SECONDARYMANAGERS<br />
    ACTIONALLOWMANAGERS<br />
      ACTIONALLOWMANAGER<br />
        NODE IP 10.136.120.193 ""<br />
        DESCRIPTION "Internally known address"<br />
Distribute this file using the following command:<br />
opcragt -distrib -templates -force <node><br />
NOTE For more details on the responsible managers file, refer to “Configuring<br />
Multiple Management Servers and MoM Functions” in the OVO Online<br />
Help.<br />
IP Masquerading or Port Address Translation<br />
IP Masquerading or Port Address Translation (PAT) is a form of Network<br />
Address Translation that allows systems without registered Internet IP<br />
addresses to communicate with the Internet via the firewall system’s<br />
single Internet IP address. All outgoing traffic is mapped to the single IP<br />
address that is registered on the Internet.<br />
Because of the restrictions in targeting connections through the firewall<br />
in both directions (server to agent, agent to server), this is currently not<br />
supported for DCE agent environments.<br />
5 DCE RPC Communication<br />
without Using<br />
Endpoint Mappers<br />
About this Chapter<br />
Multiple well-known ports used by DCE are usually considered an<br />
increased security risk. Security in firewall environments can be<br />
significantly improved by reducing communication to a minimum<br />
number of user-defined ports. OVO communication based on DCE RPC<br />
without using DCE endpoint mappers (port 135) restricts OVO<br />
communication to a total of two or three user-configurable ports. No<br />
well-known ports are used.<br />
This chapter explains the concepts of OVO communication based on DCE<br />
RPC without using DCE endpoint mappers for OVO A.07.x agents for<br />
UNIX. This is an interim solution until OVO replaces DCE RPC with<br />
HTTPS-based communication. It can be applied to DCE agents for OVO<br />
for UNIX A.08.00 and OVO for Windows A.7.2x.<br />
NOTE Additional ports are required for OVO configuration deployment <strong>and</strong><br />
performance data acquisition.<br />
Performance data communication is based on HTTP <strong>and</strong> is not discussed<br />
further in this document. For details, please refer to the <strong>HP</strong> OpenView<br />
<strong>Operations</strong> for UNIX <strong>Firewall</strong> <strong>Configuration</strong> White Paper.<br />
NOTE OVO Agents based on NCS-RPC cannot benefit from the enhancements<br />
discussed in this chapter.<br />
NOTE Information contained within this chapter assumes that firewalls have<br />
been established in accordance with the <strong>HP</strong> OpenView <strong>Operations</strong> for<br />
UNIX <strong>Firewall</strong> <strong>Configuration</strong> White Paper <strong>and</strong> that the user is familiar<br />
with OVO <strong>and</strong> firewalls in general.<br />
<strong>Concepts</strong> of Current OVO Communication<br />
With OVO A.7.x agents, communication between managed nodes <strong>and</strong><br />
management servers depends upon the operating system but is generally<br />
based on DCE RPC. This is true for most UNIX platforms <strong>and</strong> Microsoft<br />
Windows. Microsoft RPC is intrinsically a type of DCE RPC<br />
implementation. More information on this is provided in the <strong>HP</strong><br />
OpenView <strong>Operations</strong> Software Release Notes.<br />
OVO processes acting as RPC servers register at the local DCE endpoint<br />
mapper (RPCD or DCED) to publish their offered services. The next free<br />
port, or one from a configurable range of ports, is selected. An RPC<br />
daemon (RPCD) is a DCE daemon (DCED) with limited functionality; for<br />
example, it has no security features. On Microsoft Windows, the RPC<br />
Service provides this functionality.<br />
OVO processes acting as RPC clients first contact the endpoint mapper<br />
on the target node to find the registered server. In either case, the client<br />
is not initially aware of the port that the server is using <strong>and</strong> must<br />
request this information from the DCE endpoint mapper.<br />
This is the same for both OVO on UNIX <strong>and</strong> Windows. There are RPC<br />
servers <strong>and</strong> clients on both management server systems <strong>and</strong> managed<br />
node systems. In addition, there is local DCE RPC communication on the<br />
OVO management server systems <strong>and</strong> managed nodes.<br />
DCE RPC Communication <strong>Concepts</strong> without<br />
using Endpoint Mappers<br />
The fundamental requirement is to enable communication between OVO<br />
management servers <strong>and</strong> their managed nodes without using DCE RPC<br />
endpoint mappers. To reduce the number of ports in firewall<br />
environments, only communication between managed nodes <strong>and</strong><br />
management server is relevant.<br />
NOTE OVO communication without use of endpoint mappers can only be<br />
applied to DCE communication. It cannot be applied to NCS or ONC.<br />
To implement communication without the use of endpoint mappers, DCE<br />
RPC must use specific ports. These ports can be selected <strong>and</strong> configured<br />
by the user. The RPC servers <strong>and</strong> clients are aware of the ports on which<br />
to register <strong>and</strong> connect.<br />
Whether preselected ports or an RPCD lookup is used is configurable<br />
using ovconfchg commands on OVO for UNIX management server<br />
systems and opcinfo statements on all OVO DCE managed nodes.<br />
Objectives for DCE Communication without Using<br />
Endpoint Mappers for OVO<br />
❏ It must not be possible to query available endpoints from outside<br />
using DCE lookup.<br />
❏ Endpoint mappers (port 135) must not be used for communication<br />
between the management server and its managed nodes.<br />
❏ If the DCE endpoint mapper is not needed by any other application<br />
on the managed node, it can be stopped.<br />
❏ OVO RPC servers do not register at the DCE endpoint mapper. The<br />
ports of the RPC servers must be configurable for each OVO RPC<br />
client.<br />
One of two configurations can be selected, where one less port is required<br />
if it is possible to deploy policies manually. These two scenarios are<br />
described in the next sections.<br />
Port Requirements for Remote Deployment<br />
On the OVO management server:<br />
❏ The request sender (ovoareqsdr) can communicate directly with the<br />
control agent without remote DCE lookup ➊.<br />
❏ The message receiver (opcmsgrd) uses one customer-defined<br />
inbound port ➋.<br />
❏ The distribution manager (opcdistm) uses one customer-defined<br />
inbound port ➌.<br />
Figure 5-1 Port requirements for Remote Deployment<br />
On the OVO managed node:<br />
No endpoint mapper.<br />
❏ Control agent (opcctla) uses one customer-defined outbound<br />
port ➊.<br />
❏ Message (opcmsga) and distribution (opcdista) agents can<br />
communicate directly with the management server without remote<br />
DCE lookup, each using one inbound port ➋ & ➌.<br />
Available remote functionality:<br />
❏ Start actions, tools, and applications.<br />
❏ Start, stop, and status of the agent.<br />
❏ HBP via RPC only.<br />
❏ Deliver messages, action status, and annotations.<br />
❏ Deploy templates, actions, commands, and monitors.<br />
Port Requirements for Manual Template <strong>and</strong><br />
Instrumentation Deployment<br />
On the OVO management server:<br />
❏ The request sender (ovoareqsdr) can communicate directly with the<br />
control agent (opcctla) without remote DCE lookup ➊.<br />
❏ The message receiver (opcmsgrd) uses one customer-defined<br />
inbound port ➋.<br />
Figure 5-2 Port requirements for Manual Template <strong>and</strong> Instrumentation<br />
Deployment<br />
On the managed node:<br />
No OVO use of the endpoint mapper.<br />
❏ Control agent (opcctla) uses one customer-defined outbound<br />
port ➊.<br />
❏ Message agent (opcmsga) can communicate directly with the<br />
management server without remote DCE lookup, using one<br />
inbound port ➋.<br />
❏ Manual template and instrumentation deployment via opctmpldwn.<br />
Available remote functionality:<br />
❏ Start actions, tools, and applications.<br />
❏ Start, stop, and status of the agent.<br />
❏ HBP via RPC only.<br />
❏ Deliver messages, action status, and annotations.<br />
Communication <strong>Concepts</strong><br />
Figure 5-3 on page 118 illustrates the base scenario with an RPCD on<br />
the system where the RPC server is running.<br />
Figure 5-3 Communication with RPCD<br />
Key:<br />
1. The RPC server starts up. Either the RPC server, when configured on<br />
the OVO management server, or the operating system selects the<br />
port on which the server must listen. The RPC server registers itself<br />
with this port at the RPCD.<br />
2. The RPCD stores this information in its database.<br />
3. The RPC client starts but does not know the port number used by the<br />
RPC server. It queries the RPCD with the type of server it wants to<br />
contact <strong>and</strong> some additional interface specifications uniquely<br />
identifying the target server. The RPCD returns the port number.<br />
4. The RPC client can now contact the desired RPC server.<br />
Figure 5-4 illustrates the arrangement without an RPCD on the system<br />
where the RPC server is running:<br />
Figure 5-4 Communication without RPCD<br />
Key:<br />
1. The RPC server starts up. It selects the port at which to listen from<br />
the OVO management server configuration variable<br />
OPC_COMM_PORT_RANGE. It does not register anywhere <strong>and</strong> simply<br />
listens at this port.<br />
2. From its local configuration, the RPC client determines that the RPC<br />
server must be contacted without an RPCD lookup. It has either the<br />
name of the server port specification file, specified as an opcinfo<br />
variable, or the ports set in the OVO management server variables<br />
OPC_COMM_PORT_MSGR and OPC_COMM_PORT_DISTM. In OVO, this<br />
applies to managed node to management server communication.<br />
3. The RPC client searches for the desired RPC server within the server<br />
port specification file, based on the server type and target node. The<br />
file entry contains the port where the RPC server should be listening.<br />
In OVO, this applies to two-way communication between managed<br />
node and management server.<br />
4. The RPC client can now directly contact the RPC server.<br />
NOTE Mixed environments are also possible, especially with OVO agents with<br />
<strong>and</strong> without RPCD. In this case, the ovoareqsdr process on the OVO<br />
management server is the RPC Client talking to the Control Agent<br />
processes (opcctla) on the managed node with or without the RPCD<br />
running. For example, the OVO management server node has an RPCD<br />
running whereas other managed nodes may or may not use the RPCD.<br />
Support Restrictions<br />
The following restrictions apply with respect to OVO managed nodes:<br />
❏ Supported platforms are <strong>HP</strong>-UX, Microsoft Windows, Sun Solaris<br />
(DCE), IBM AIX (DCE), Tru64 (DCE), <strong>and</strong> Linux.<br />
❏ NCS or ONC based communication cannot be used this way (this<br />
affects some managed node platforms, see OVO documentation).<br />
❏ MPE managed nodes are not covered.<br />
OVO Components Affected<br />
The following RPC relationships between the OVO management server<br />
<strong>and</strong> managed nodes are affected:<br />
Table 5-1 OVO Components Affected<br />
RPC Client | RPC Server | Direction | Explanation<br />
opcmsga | opcmsgrd | Mgd Node ➝ Mgmt Server | OVO messages and action responses<br />
opcdista | opcdistm | Mgd Node ➝ Mgmt Server | OVO distribution pull (initiated by agent)<br />
ovoareqsdr, opcragt | opcctla | Mgmt Server ➝ Mgd Node | Action requests, control requests, heartbeat polling, ...<br />
opcagt, opcmsga | opcctla | Locally on Mgd Node | Local control commands<br />
OVO Components Not Affected<br />
The following local RPC relationships are not affected by the well-known<br />
port mechanism:<br />
Table 5-2 OVO Components Not Affected<br />
RPC Client | RPC Server | Direction | Explanation<br />
opcactm, opcmsgm, ... | opcdispm | Locally on Mgmt Server | GUI interaction initiated by GUIs and OVO server processes<br />
ovoareqsdr, opcctlm | opcmsgrd | Locally on Mgmt Server | Test whether the RPCD is accessible<br />
NOTE The DCE endpoint mapper on the OVO management server is needed by<br />
the display manager.<br />
<strong>Configuration</strong><br />
Setting of Variables for Processes<br />
Most parameters for communication without RPCD are configured either<br />
in the opcinfo file (managed node) or using the ovconfchg command on<br />
the OVO management server. It is sometimes very important to apply<br />
settings to selected processes only. This is done using the following<br />
syntax:<br />
On a DCE managed node, an entry OPC_RESTRICT_TO_PROCS starts a<br />
section that only applies to the specified process. All following entries are<br />
only evaluated for this one process. A second OPC_RESTRICT_TO_PROCS<br />
entry starts a section that only applies to the next specified process.<br />
All entries that should apply to all processes must be specified before the<br />
first occurrence of OPC_RESTRICT_TO_PROCS.<br />
Example:<br />
OPC_COMM_RPC_PORT_FILE /tmp/port_conf.txt<br />
OPC_RESTRICT_TO_PROCS opcctla<br />
OPC_COMM_PORT_RANGE 5001<br />
In this case, the specified server port specification file is valid for all<br />
processes, while the port setting is valid only for the control agent<br />
(opcctla).<br />
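The scoping rules can be illustrated with a small awk sketch (a hypothetical helper, not an HP tool) that prints the entries applying to a given process:

```shell
# Print the opcinfo entries that apply to one process: everything before
# the first OPC_RESTRICT_TO_PROCS is global; each section after such an
# entry applies only to the named process. (Illustrative sketch only.)
opcinfo_for_proc() {  # usage: opcinfo_for_proc PROCESS FILE
  awk -v proc="$1" '
    $1 == "OPC_RESTRICT_TO_PROCS" { section = $2; next }
    section == "" || section == proc { print }
  ' "$2"
}
```

With the example file above, `opcinfo_for_proc opcmsga` would print only the OPC_COMM_RPC_PORT_FILE line, while `opcinfo_for_proc opcctla` would also print the OPC_COMM_PORT_RANGE setting.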
On the OVO management server, use the command:<br />
ovconfchg -ovrg server -ns opc.<process> -set \<br />
<variable> <value><br />
For example:<br />
ovconfchg -ovrg server -ns opc.opcmsgrd -set \<br />
OPC_COMM_PORT_RANGE 12345<br />
Configuring Managed Nodes<br />
RPC Clients<br />
RPC clients are:<br />
❏ Message Agent - opcmsga<br />
❏ Distribution Agent - opcdista<br />
❏ Control command:<br />
UNIX: opcctla<br />
Windows: opcagt<br />
The settings for the opcinfo file on the managed node are described in<br />
Table 5-3.<br />
The settings OPC_COMM_PORT_MSGR <strong>and</strong> OPC_COMM_PORT_DISTM can be<br />
used if one of the following cases is valid:<br />
❏ Only one management server will be contacted.<br />
or<br />
❏ All management servers are configured to use the same RPC server<br />
ports.<br />
NOTE If managed nodes that are configured to run without the DCE endpoint<br />
mapper are members of a high availability cluster, you must use the<br />
same port settings for ALL cluster nodes managed by OVO (physical and<br />
virtual). This applies to the PORT_RANGE value used by opcctla and<br />
therefore also to the port configuration file on the OVO server.<br />
If the OVO management server runs as a high availability cluster<br />
application, make sure that all possible cluster members where the OVO<br />
management server may run use the same port settings (PORT_RANGE<br />
value of opcmsgrd <strong>and</strong> opcdistm).<br />
Table 5-3 opcinfo File Settings: Single Management Server<br />
Key | Type | Default | Explanation<br />
OPC_COMM_PORT_MSGR | Integer | 0 | Specifies the port on the management server on which the Message Receiver (opcmsgrd) is listening. Enter a port number.<br />
OPC_COMM_PORT_DISTM | Integer | 0 | Specifies the port on the management server on which the Distribution Manager (opcdistm) is listening. Enter a port number.<br />
If multiple management servers with different port settings are used, a<br />
dedicated server port specification file must be configured using<br />
OPC_COMM_RPC_PORT_FILE because opcinfo only supports name/value<br />
pairs.<br />
Table 5-4 opcinfo File Settings: Multiple Management Servers<br />
Key | Type | Default | Explanation<br />
OPC_COMM_RPC_PORT_FILE | String | Empty | May contain a complete path pointing to a server port specification file for multiple OVO servers, as described in the example below. Enter the path and name of the server port specification file.<br />
Example of a port configuration file<br />
#<br />
# SelectionCriteria SrvType Port Node<br />
#<br />
--------------------------------------------------------------<br />
NODE_NAME opcmsgrd 5000 primaryserver.hp.com<br />
NODE_NAME opcdistm 5001 primaryserver.hp.com<br />
NODE_NAME opcmsgrd 6000 backupserver.hp.com<br />
NODE_NAME opcdistm 6001 backupserver.hp.com<br />
The server port specification file, if configured, is evaluated first. If not,<br />
the two additional opcinfo values are evaluated. If these have a value of<br />
0, they are considered as not set.<br />
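This evaluation order can be sketched as follows (a hypothetical helper, not part of OVO), shown here for the message receiver port:

```shell
# Resolve the opcmsgrd port for a target node the way the RPC client
# does: server port specification file first, then OPC_COMM_PORT_MSGR
# (where 0 means not set), otherwise fall back to an RPCD lookup.
# Illustrative sketch only.
resolve_msgr_port() {  # usage: resolve_msgr_port NODE PORT_FILE PORT_MSGR
  port=$(awk -v node="$1" \
      '$1 == "NODE_NAME" && $2 == "opcmsgrd" && $4 == node { print $3; exit }' \
      "$2" 2>/dev/null)
  if [ -n "$port" ]; then
    echo "$port"              # match in the port specification file
  elif [ "${3:-0}" -ne 0 ]; then
    echo "$3"                 # opcinfo value
  else
    echo "RPCD"               # fall back to an RPCD lookup
  fi
}
```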
Table 5-5 opcinfo File Settings: RPCD Lookup<br />
Key | Type | Default | Explanation<br />
OPC_COMM_LOOKUP_RPC_SRV | Boolean | TRUE | Whether or not to perform an RPCD lookup if no matching port has been found during the previous steps. Set to FALSE.<br />
OPC_COMM_PORT_RANGE | String | Empty | Specifies an RPC client port range. Can be configured per process; can be set for the Message Agent (opcmsga) and the Distribution Agent (opcdista). This variable does not work for RPC clients on Microsoft Windows platforms. If required, enter a port range.<br />
RPC Server<br />
The RPC server is the Control Agent (opcctla).<br />
On OVO managed nodes, it is possible to completely disable the RPCD,<br />
unless it is required by other applications.<br />
The settings for the opcinfo (or nodeinfo) file on the managed node are<br />
described in Table 5-6.<br />
Table 5-6 opcinfo File Settings: Register RPC Server<br />
Key | Type | Default | Explanation<br />
OPC_COMM_REGISTER_RPC_SRV | Boolean | TRUE | Selects whether to register RPC server interfaces with the RPCD. Set to FALSE.<br />
OPC_COMM_PORT_RANGE | String | Empty | Specifies the ports to be used by the RPC server. Must be set per process; applies to the control agent (opcctla). Enter one port value when using environments without RPCD.<br />
NOTE When installing the OVO agent using the regular OVO agent software<br />
distribution mechanism to a target managed node where the DCE RPCD<br />
has already been stopped, you may receive the following warnings, which<br />
you can safely ignore. After configuring the opcinfo file on the managed<br />
node, start the OVO agent manually.<br />
Checking if NCS or DCE-RPC packages are installed and if either<br />
Local Location Broker (llbd) or DCE-RPC daemon (rpcd/dced) is<br />
running properly on <node>.<br />
WARNING: DCE-RPC Daemon (rpcd/dced) is not running on system<br />
<node>, but required to run VPO; Start it and integrate the<br />
startup in the appropriate system boot file.<br />
WARNING: Automatic (re-)start option of VPO services on<br />
<node> will be ignored due to detected NCS / DCE-RPC<br />
problems.<br />
CAUTION Make sure that the configured OPC_COMM_PORT_RANGE for the control<br />
agent (opcctla) contains not more than one port and applies exclusively<br />
to this process.<br />
If needed, you may configure a separate port range for other processes.<br />
Example opcinfo or nodeinfo File Configuration
OPC_COMM_PORT_MSGR 5000
OPC_COMM_PORT_DISTM 5001
OPC_COMM_LOOKUP_RPC_SRV FALSE
OPC_COMM_REGISTER_RPC_SRV FALSE
OPC_RESTRICT_TO_PROCS opcctla
OPC_COMM_PORT_RANGE 12345
Configuring Management Servers
RPC Clients
RPC clients are:
❏ Request Sender - ovoareqsdr
❏ Remote Agent Tool - opcragt
The OVO for UNIX management servers can be configured with the
settings described in Table 5-7.
Table 5-7 Configuration Settings: RPC Clients on OVO Management Servers

OPC_COMM_RPC_PORT_FILE (String, default: empty)
    May contain a complete path pointing to a server port specification
    file for multiple OVO servers, as described in the example below.
    Enter the path and name of the server port specification file.

OPC_COMM_LOOKUP_RPC_SRV (Boolean, default: TRUE)
    Whether or not to perform an RPCD lookup if no matching port has
    been found during the previous steps.
    Set to FALSE only if all managed nodes can be contacted without an
    RPCD lookup.

OPC_COMM_PORT_RANGE (String, default: empty)
    Specifies an RPC client port range. Can be configured per process,
    in particular for the Request Sender (ovoareqsdr) and the Remote
    Agent Tool (opcragt).
    If required, enter a port range.
Command Examples for Setting the Port File on an OVO Management
Server.
ovconfchg -ovrg server -ns opc -set OPC_COMM_RPC_PORT_FILE \
etc/opt/OV/share/conf/OpC/mgmt_sv/
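For illustration, a complete invocation might look like the following. The file name dce.ports is only an example (it matches the example configuration later in this chapter); the file can have any descriptive name:

```
ovconfchg -ovrg server -ns opc -set OPC_COMM_RPC_PORT_FILE \
  /etc/opt/OV/share/conf/OpC/mgmt_sv/dce.ports
```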
RPC Servers
RPC servers are:
❏ Message Receiver - opcmsgrd
❏ Distribution Manager - opcdistm
On the OVO management server, the RPCD must be left running, but the
RPC servers called from managed nodes (opcmsgrd and opcdistm) can
be configured to register at fixed ports, so that agents can contact them
without querying the RPCD.
The OVO management servers can contain the following settings:
Table 5-8 Configuration Settings: RPC Servers on OVO Management Servers

OPC_COMM_REGISTER_RPC_SRV (Boolean, default: TRUE)
    Selects whether to register RPC interfaces with the RPCD. Must be
    set per process.
    May only be used for the Message Receiver (opcmsgrd) and the
    Distribution Manager (opcdistm).
    If TRUE, managed nodes can send messages in the usual way;
    otherwise, all managed nodes must have a dedicated server port
    configuration to be able to reach the OVO management server.
    Set to FALSE only if all managed nodes can contact the RPC servers
    on the OVO management server without an RPCD lookup.

OPC_COMM_PORT_RANGE (String, default: empty)
    Specifies the one port to be used by the RPC server. Must be set
    per process.
    It must be set for the Message Receiver (opcmsgrd) and the
    Distribution Manager (opcdistm).
    Enter one port value for each process.
CAUTION Do not apply OPC_COMM_REGISTER_RPC_SRV with FALSE to the Display
Manager (opcdispm). This process must register at the RPCD; otherwise,
local RPC communication on the OVO management server will fail.
Do not apply OPC_COMM_LOOKUP_RPC_SRV with FALSE to any OVO server
processes other than ovoareqsdr and opcragt.
NOTE After you have completed all configuration steps, you must stop and start
both the opc and ovoacomm processes. opcmsgrd is one of the ovoacomm
processes and is stopped and started with the commands:
ovstop ovoacomm
ovstart opc (or opcsv -start)
Example Configuration
ovconfchg -ovrg server -ns opc.ovoareqsdr -set \
OPC_COMM_RPC_PORT_FILE /opt/OV/dce.ports
ovconfchg -ovrg server -ns opc.opcragt -set \
OPC_COMM_RPC_PORT_FILE /opt/OV/dce.ports
ovconfchg -ovrg server -ns opc.opcmsgrd -set \
OPC_COMM_PORT_RANGE 5000
ovconfchg -ovrg server -ns opc.opcdistm -set \
OPC_COMM_PORT_RANGE 5001
The following settings result from the above commands:
[opc.ovoareqsdr]
OPC_COMM_RPC_PORT_FILE = /opt/OV/dce.ports
[opc.opcragt]
OPC_COMM_RPC_PORT_FILE = /opt/OV/dce.ports
[opc.opcmsgrd]
OPC_COMM_PORT_RANGE = 5000
[opc.opcdistm]
OPC_COMM_PORT_RANGE = 5001
Server Port Specification File
The server port specification file must be used on the management server
to specify control agent ports for managed nodes.
The server port specification file on the managed node may be used if
there are multiple management servers and they are not configured to
use the same RPC server port.

NOTE Standard OVO patterns can be used.
Patterns without anchoring match values that may be prefixed or
suffixed by anything.

If a file is configured, the RPC client reads the file before opening a
connection to an RPC server and matches the server type and target node
against the list of entries. The first match terminates the operation.
The variable OPC_COMM_LOOKUP_RPC_SRV decides whether to perform an
RPCD lookup if:
❏ No match is found.
❏ The server port specification file does not exist.
❏ The server port specification file has not been configured.
If an RPCD lookup is not performed, or it fails, the communication
failure is handled in the usual way.
It is recommended that the file is protected by applying the appropriate
operating system access mode. The file is only read by the OVO processes,
and the most restrictive permission setting would be:

UNIX        -r-------- 1 root sys <filename>
            If the OVO agent does not run under the user root, the
            file owner should be set appropriately for that user.

Windows     Allow:Read for the SYSTEM user.
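As a sketch, the UNIX permissions above can be applied as follows. The path and file name are placeholders; adjust them to the location and agent user of your installation:

```shell
# Hypothetical file name; on a managed node this would typically live
# under /var/opt/OV/conf/OpC/ as recommended below.
PORTFILE=dce.ports

touch "$PORTFILE"
chmod 400 "$PORTFILE"     # -r-------- : readable only by the owning user
# If the OVO agent runs as root, ownership would typically be root:sys:
# chown root:sys "$PORTFILE"
```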
Figure 5-5 Properties of the Port Specification File
The location of the file can be defined as needed, but it is recommended
to put it into a static OVO configuration directory, for example:

Management Server   /etc/opt/OV/share/conf/OpC/mgmt_sv/
Managed Node        /var/opt/OV/conf/OpC/

and give the file an appropriately descriptive name.
File Syntax
❏ Empty lines are accepted.
❏ Comments start with #, which must be the very first character in the
line. A line containing configuration data must not have trailing
comments.
❏ Configuration data must be specified using 4 standard elements,
separated by white space:

SelectionCriteria
    NODE_NAME      Node name pattern or exact match
    NODE_ADDRESS   IP address pattern or exact match
SrvType
    opcctla        Management Server contacting the Agent
    opcmsgrd       Message Agent contacting the Management Server
    opcdistm       Distribution Agent contacting the Management Server
Port
    Port number to contact this RPC server
Node
    Node name or address pattern for this rule, depending on whether
    NODE_NAME or NODE_ADDRESS is specified in SelectionCriteria
Example of an opcsvinfo File
#
# SelectionCriteria  SrvType  Port   Node
#
--------------------------------------------------------------
NODE_NAME      opcctla  12345  .hp.com
NODE_ADDRESS   opcctla  12346  15.136.
NODE_ADDRESS   opcctla  12347  ^192.
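The first-match semantics described above can be sketched with a small shell function. This is a simplified illustration only, not OVO code: it handles exact matches and treats each entry literally, whereas OVO additionally supports its pattern syntax and distinguishes NODE_NAME from NODE_ADDRESS:

```shell
# Return the first port whose entry matches the given server type and node.
# $1 = server type (e.g. opcctla), $2 = target node or address, $3 = file
lookup_port() {
  while read -r criteria srvtype port node; do
    case "$criteria" in
      \#*|"") continue ;;                # skip comments and empty lines
    esac
    if [ "$srvtype" = "$1" ] && [ "$node" = "$2" ]; then
      echo "$port"                       # first match terminates the search
      return 0
    fi
  done < "$3"
  return 1   # no match: OPC_COMM_LOOKUP_RPC_SRV decides about an RPCD lookup
}

cat > ports.example <<'EOF'
# SelectionCriteria SrvType Port  Node
NODE_ADDRESS        opcctla 12346 15.136.120.88
NODE_ADDRESS        opcctla 12347 192.168.1.1
EOF

lookup_port opcctla 192.168.1.1 ports.example    # prints 12347
```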
File Modification Test
The RPC client checks the server port specification file and re-loads it
if it has been changed, as indicated by a different size and/or
modification time. The RPC client then opens a connection to an RPC
server and attempts to match the server type and target node against
the list of entries. The first match terminates the operation. If no
match is found, or the file does not exist or has not been configured,
the variable OPC_COMM_LOOKUP_RPC_SRV decides whether to perform an RPCD
lookup.
A configured port value of 0 is equivalent to no matching entry being
found and causes the RPC client to perform a regular RPCD lookup
(unless disabled entirely). Similarly to OVO suppress conditions, this
can be used to specify an initial entry that filters out all nodes (by
pattern) that still have an RPCD running. All other nodes that do not
match are compared with the remaining entries in the file.
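For example, a file using this port-0 technique might look as follows; the node name patterns are purely illustrative:

```
# Nodes matching this pattern still run an RPCD: force a regular lookup.
NODE_NAME   opcctla  0      .rpcd.example.com
# All remaining nodes are contacted at the fixed agent port.
NODE_NAME   opcctla  12345  .example.com
```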
Internal Process Handling
RPC clients on managed nodes perform the following steps upon
connecting to an RPC server. A key to the diagram can be found below.
Figure 5-6 Internal Process Handling for RPC Clients
Find Target Server and Node is the process of matching the target
RPC server against the server port specification file.
Regular RPC Lookup is the process of querying the RPCD on the
target system.
Some of the decisions made in the flow chart above are implemented by
evaluating opcinfo variables:
❏ Port file configured?     OPC_COMM_RPC_PORT_FILE
❏ Rpcd lookup enabled?      OPC_COMM_LOOKUP_RPC_SRV
On managed node?
Will be answered with yes by all RPC clients running on the managed
node, that is, opcmsga, opcdista and opcagt.
Special RPC Server?
Will be answered with yes if the RPC client wants to connect to one of
the following RPC servers. If set, the value of the associated variable
will be used by the RPC client as the port of the target RPC server:
Table 5-9 RPC Servers and Associated Variables

RPC Server                        Key
Control agent (opcctla)           OPC_COMM_PORT_RANGE (as applicable to opcctla)
Message receiver (opcmsgrd)       OPC_COMM_PORT_MSGR
Distribution manager (opcdistm)   OPC_COMM_PORT_DISTM
NOTE The flow chart indicates that the Special RPC Server decision applies<br />
only to RPC clients running on managed nodes.<br />
Variable Reference
The following settings are valid in the agent's opcinfo (or nodeinfo) file:
Table 5-10 Managed Node opcinfo Variables
OPC_COMM_REGISTER_RPC_SRV (FALSE)
    The opcctla does not register at the RPCD.

OPC_COMM_LOOKUP_RPC_SRV (FALSE)
    Whether or not to perform an RPCD lookup if no matching port has
    been found during manual configuration.

OPC_COMM_PORT_RANGE (one number per RPC server)
    Must be set for opcctla. Specifies the port on which the RPC server
    listens. Must match the server port specification file on the OVO
    management server.

OPC_COMM_PORT_RANGE (range for RPC clients)
    Port range to use for the RPC clients when establishing a
    connection to the RPC server. Can be set for opcmsga and opcdista.
    The OPC_COMM_PORT_RANGE variable does not work on Microsoft Windows
    platforms. If required, enter a port range.

OPC_COMM_RPC_PORT_FILE (full path, or not set)
    If set, it points to a server port specification file, as described
    earlier. If not set, the following two variables must be set.

OPC_COMM_PORT_MSGR (one number)
    Port where opcmsgrd is listening; must match the configuration on
    the OVO management server.

OPC_COMM_PORT_DISTM (one number)
    Port where opcdistm is listening; must match the configuration on
    the OVO management server.
The following settings are valid on the OVO management server:<br />
Table 5-11 Management Server Variables

OPC_COMM_REGISTER_RPC_SRV (TRUE or FALSE)
    Registering at the RPCD by the OVO RPC servers is optional. The
    value can be set to FALSE specifically for opcmsgrd and opcdistm
    only. In this case, all managed nodes must have RPC server ports
    configured. If set to TRUE, standard managed nodes will continue to
    work, and managed nodes with configured RPC server ports will use
    those and will not perform an RPCD lookup. This variable must not
    be set for opcdispm.

OPC_COMM_PORT_RANGE (one number per RPC server)
    Must be set for opcmsgrd and opcdistm. Specifies the port on which
    the RPC server listens. If OPC_COMM_REGISTER_RPC_SRV is set to
    FALSE, the ports specified here must be configured on all managed
    nodes.

OPC_COMM_PORT_RANGE (range for RPC clients)
    Port range to use for RPC clients when establishing a connection to
    the RPC server. Can be set for ovoareqsdr and opcragt.
    The OPC_COMM_PORT_RANGE variable does not work on Microsoft Windows
    platforms. If required, enter a port range.

OPC_COMM_RPC_PORT_FILE (full path)
    Must point to a server port specification file, as described
    earlier. This file must contain ports for all managed nodes without
    an RPCD running. It may also contain settings for managed nodes
    with an RPCD; in this case, no RPCD lookup takes place.

OPC_COMM_LOOKUP_RPC_SRV (TRUE or FALSE)
    The value should be TRUE if there are managed nodes with an RPCD
    that are not specified in the server port specification file. If
    there are no such nodes, the value may be FALSE.
Examples
Figure 5-7 illustrates the scenario where the managed nodes are
managed by one OVO management server.
Figure 5-7 Environment with one OVO Management Server

NOTE The names of the variables are shortened for better readability;
typically they start with OPC_. See Table 5-11 for the complete names.

Figure 5-8 illustrates the scenario where the managed nodes are
managed by more than one OVO management server.
Figure 5-8 Environment with many OVO Management Servers

NOTE The names of the variables are shortened for better readability;
typically they start with OPC_. See Table 5-11 for the complete names.
Troubleshooting
Diagnostics
The following errors, if reported after an operation, can indicate the
reason for the failure:
Cannot connect to RPC service at system
'ncadg_ip_udp:15.136.120.88[22222]'. Local port configuration has
been consulted - rpcd/llbd on remote system not queried.
(OpC20-186)

Communication errors always contain this form of message when a local
port configuration has been used by an RPC client to connect to the
server. If this text is NOT part of the error message, a regular RPCD
lookup has been performed. The stated RPC binding contains the used
protocol sequence (TCP or UDP) as well as the target IP address and the
port where the RPC server had been expected.

Cannot find port for RPC service 'opcctla' at '15.136.120.88' in
local configuration. (OpC20-187)

There was no port configuration found for the specified service (either
in the server port specification file, opc(sv)info, or anywhere else)
and OPC_COMM_LOOKUP_RPC_SRV is FALSE, that is, NO RPCD lookup has been
performed.

The port configuration file /xxxx contains an invalid entry: '...
some syntax error ..'. (OpC20-174)

The file /xxxx contains bad entries. The value of
OPC_COMM_LOOKUP_RPC_SRV is irrelevant and NO RPCD lookup has been
performed.

If no local port configuration for a target service has been found and
OPC_COMM_LOOKUP_RPC_SRV is TRUE (the default), the traditional behavior
of looking up the target RPC server at the RPCD applies. Use tracing for
further troubleshooting.
NOTE It may now take much longer for RPC calls to time out if the called RPC
server is not listening, particularly over UDP. The RPC client must now
attempt to find a server, whereas before, the endpoint mapper was
usually running and immediately returned the information about the
desired RPC server and whether it was registered or not.

If you receive the following error on the OVO for UNIX management
server when starting the OVO server processes using ovstart:

opc279 DONE llbd/rpcdaemon is not running

make sure that the RPCD is running. If you have set the variable
OPC_COMM_LOOKUP_RPC_SRV to FALSE for ovoareqsdr, make sure that
there is also an entry for the local opcmsgrd in the port configuration
file.

If the start of the OVO server processes takes an abnormally long time
and eventually fails, verify that opcdispm is registered at the RPCD and
OPC_COMM_LOOKUP_RPC_SRV is not set to FALSE for any processes other than
ovoareqsdr and opcragt.
Tracing
The new and modified functionality contains trace/debug statements that
are used like all other OV tracing. In particular, the DEBUG areas CONF
and COMM are of interest, since they cover the evaluation of configurable
variables and DCE communication. The DEBUG area FILE might be
interesting to track the detection of a modified server port
specification file.
To enable tracing for these areas on a DCE managed node, set the
following in the opcinfo file:
OPC_TRACE TRUE
OPC_TRACE_AREA ALL,DEBUG
OPC_DBG_AREA CONF,COMM
To enable tracing on an OVO 8.0 management server, execute the
following steps:
1. Start the Windows Trace GUI.
2. Select the processes to trace.
3. Select the component opc.debug and set it to max.
Possibly restrict this to the processes of interest. These are, in
general, all RPC servers and clients in the RPC relationships as listed
earlier.
For further information about tracing, refer to the HP OpenView
Operations Tracing Concepts and User's Guide.
NOTE The rpccp/opcrpccp program (show mapping sub-command) will not
show RPC servers for which the OPC_COMM_REGISTER_RPC_SRV setting is
FALSE. Furthermore, this program will fail altogether if the RPCD/DCED
is not running on the target node.
RPC Servers log the following type of information, if RPCD registration is enabled:
... opcmsgrd(...)[DEBUG]: COMM: Register manager
... opcmsgrd(...)[DEBUG]: CONF: Returning value 'TRUE' for key
'OPC_COMM_REGISTER_RPC_SRV'
... opcmsgrd(...)[DEBUG]: COMM: Checking hostname '' for replacement.<br />
... opcmsgrd(...)[INIT]: Regarding the setting of OPC_IP_ADDRESS<br />
... opcmsgrd(...)[DEBUG]: CONF: No value found for key 'OPC_IP_ADDRESS'<br />
... opcmsgrd(...)[DEBUG]: COMM: Using '15.139.88.156' as local address.<br />
... opcmsgrd(...)[DEBUG]: COMM: Returning '15.139.88.156'.<br />
... opcmsgrd(...)[DEBUG]: COMM: Lookup Srv<br />
... opcmsgrd(...)[DEBUG]: COMM: Server lookup using rpcd interface.<br />
... opcmsgrd(...)[DEBUG]: COMM: Element lookup initialized<br />
... opcmsgrd(...)[DEBUG]: CONF: Returning value '13' for key 'OPC_MAX_PORT_RETRIES'<br />
... opcmsgrd(...)[DEBUG]: COMM: Got another element<br />
... opcmsgrd(...)[DEBUG]: COMM: Srv lookup using rpcd done. NumSrv = 0. rc = 0.<br />
... opcmsgrd(...)[DEBUG]: COMM: Register manager.<br />
... opcmsgrd(...)[DEBUG]: COMM: rpc_ep_register for binding '0' successful<br />
... opcmsgrd(...)[DEBUG]: COMM: rpc_ep_register for binding '1' successful [...]<br />
... opcmsgrd(...)[INT]: Entering RPC server loop ...<br />
RPC Servers log the following type of information, if RPCD registration is disabled:
... opcctla(...)[DEBUG]: COMM: Register manager<br />
... opcctla(...)[DEBUG]: CONF: Returning value 'FALSE' for key<br />
'OPC_COMM_REGISTER_RPC_SRV'<br />
... opcctla(...)[DEBUG]: COMM: Register manager.<br />
... opcctla(...)[DEBUG]: COMM: Register manager<br />
... opcctla(...)[DEBUG]: CONF: Returning value 'FALSE' for key<br />
'OPC_COMM_REGISTER_RPC_SRV' [...]<br />
... opcctla(...)[DEBUG]: COMM: Entering RPC main loop ...<br />
RPC clients on the OVO management server log the following type of information:<br />
... opcragt(...)[DEBUG]: COMM: Connecting with address: 15.136.120.88<br />
... opcragt(...)[DEBUG]: COMM: Getting server port for: opcctla on host:<br />
'15.136.120.88'<br />
... opcragt(...)[DEBUG]: CONF: Returning value '/tmp/ports.tge' for key<br />
'OPC_COMM_RPC_PORT_FILE'<br />
... opcragt(...)[DEBUG]: COMM: Examining external client port file /tmp/ports.tge ...<br />
... opcragt(...)[DEBUG]: FILE: File '/tmp/ports.tge' has been modified: -1/0 -<br />
429/1036055139.<br />
... opcragt(...)[DEBUG]: COMM: Re-loading external client port file ...<br />
... opcragt(...)[DEBUG]: COMM: Server port config line: 'NODE_ADDRESS opcctla 22222<br />
12.111.144.11'.<br />
... opcragt(...)[DEBUG]: COMM: Server port config line: 'NODE_NAME opcctla 22222<br />
^fred..hp.com$'.<br />
... opcragt(...)[DEBUG]: COMM: Server port config line: 'NODE_NAME opcctla 0<br />
tcbbn056.bbn'.<br />
... opcragt(...)[DEBUG]: COMM: Server port config line: 'NODE_ADDRESS opcctla 22223<br />
^15.13.'.<br />
... opcragt(...)[DEBUG]: COMM: Activating external client port file. 4 entries.<br />
... opcragt(...)[DEBUG]: COMM: Searching server port for: opcctla at '15.136.120.88'<br />
(loaded from/tmp/ports.tge).<br />
... opcragt(...)[DEBUG]: COMM: Server entry[0] match by srv type: opcctla - opcctla.<br />
... opcragt(...)[DEBUG]: COMM: Server entry[0] match by IP address 15.136.120.88(1/1).<br />
... opcragt(...)[DEBUG]: COMM: Matching (direct) '15.136.120.88' against
pattern '15.136.120.88'..
... opcragt(...)[DEBUG]: COMM: Match: TRUE.<br />
... opcragt(...)[DEBUG]: COMM: Matched IP address opcctla at 15.136.120.88 -> 22222.<br />
... opcragt(...)[DEBUG]: COMM: Got server port for: opcctla at 15.136.120.88 from<br />
external port config file: 22222<br />
... opcragt(...)[DEBUG]: COMM: Checking hostname '15.136.120.88' for replacement.<br />
... opcragt(...)[DEBUG]: COMM: Returning '15.136.120.88'.<br />
... opcragt(...)[INIT]: Regarding the setting of OPC_IP_ADDRESS<br />
... opcragt(...)[DEBUG]: CONF: No value found for key 'OPC_IP_ADDRESS'<br />
... opcragt(...)[DEBUG]: COMM: Using '15.139.88.156' as local address.<br />
... opcragt(...)[DEBUG]: COMM: Connection to non-local node. Using long timeout.<br />
... opcragt(...)[DEBUG]: COMM: Checking server. Mgr type: 0x0<br />
... opcragt(...)[DEBUG]: COMM: Binding: ncadg_ip_udp:15.136.120.88[22222]<br />
... opcragt(...)[DEBUG]: CONF: Returning value '13' for key 'OPC_MAX_PORT_RETRIES'<br />
... opcragt(...)[DEBUG]: COMM: Checking whether server is listening ...<br />
... opcragt(...)[DEBUG]: COMM: Checking server: succeeded. st=0 rpc_rc=1<br />
RPC clients on the managed node log the following type of information:<br />
... opcmsga(...)[INIT]: Connecting message receiver on 260790428 ...<br />
... opcmsga(...)[DEBUG]: COMM: Connecting with address: 15.139.88.156<br />
... opcmsga(...)[DEBUG]: COMM: Getting server port for: opcmsgrd on host:<br />
'15.139.88.156'<br />
... opcmsga(...)[DEBUG]: CONF: Returning value '/tmp/ports.tge' for key<br />
'OPC_COMM_RPC_PORT_FILE'<br />
... opcmsga(...)[DEBUG]: COMM: Examining external client port file /tmp/ports.tge ...<br />
... opcmsga(...)[DEBUG]: COMM: Re-loading external client port file ...<br />
... opcmsga(...)[DEBUG]: COMM: Activating external client port file. 0 entries.<br />
... opcmsga(...)[DEBUG]: COMM: Searching server port for: opcmsgrd at '15.139.88.156'<br />
... opcmsga(...)[DEBUG]: CONF: Returning value '51528' for key 'OPC_COMM_PORT_MSGR'<br />
... opcmsga(...)[DEBUG]: COMM: Got opcmsgrd server port from<br />
opc/nodeinfo[OPC_COMM_PORT_MSGR]: 51528.<br />
... opcmsga(...)[DEBUG]: COMM: Checking hostname '15.139.88.156' for replacement.<br />
... opcmsga(...)[DEBUG]: COMM: Returning '15.139.88.156'.<br />
... opcmsga(...)[DEBUG]: COMM: Connection to non-local node. Using long timeout.<br />
... opcmsga(...)[DEBUG]: COMM: Checking server. Mgr type: 0x0<br />
... opcmsga(...)[DEBUG]: COMM: Binding: ncadg_ip_udp:15.139.88.156[51528]<br />
... opcmsga(...)[DEBUG]: CONF: Returning value '13' for key 'OPC_MAX_PORT_RETRIES'<br />
... opcmsga(...)[DEBUG]: COMM: Checking whether server is listening ...<br />
... opcmsga(...)[DEBUG]: COMM: Checking server: succeeded. st=0 rpc_rc=1<br />
Testing
To verify that a configuration is correct, check the following:
1. Start all OVO processes.
2. Use opcrpccp to verify that the registration of the OVO RPC servers
on both the managed node and the management server is correct. Make
sure the correct ports are used as configured. If there is no RPCD
running on the target system, this command will fail entirely (which
is correct).
Verify that all processes are running properly using opcsv and
opcagt.
3. Test server-to-agent communication using opcragt. Further test this
communication by starting an application from the OVO application
desktop on the target managed node.
4. Test agent-to-server communication by sending test messages using
opcmsg or any other mechanism generating an OVO message to be
sent to the management server.
5. Test configuration distribution (templates and
actions/commands/monitors).
6. Wait for heartbeat-polling cycles to expire; no errors should be
reported. If the OVO agent has been stopped, the associated HBP
errors should be displayed.
7. Everything should behave as usual. There should be no error
messages in the OVO GUI or in the opcerror log files on either the
managed node or the management server.
Check the opcerror log files on both the managed node and the
management server for applicable entries.
8. To test an external client port configuration file, enable tracing
(area ALL,DEBUG and debug areas COMM,CONF) and use opcragt to make
sure that the target nodes contacted with opcragt are matched
correctly. Watch for entries as shown in the tracing section.
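The first steps above might look like this on the command line; the node name is a placeholder, and the opcmsg parameters are illustrative only:

```
opcsv                          # server processes running?
opcagt -status                 # agent processes on the managed node
opcrpccp show mapping          # registered RPC servers and their ports
opcragt -status <node>         # server-to-agent communication
opcmsg application=Test object=Test severity=normal \
       msg_text="firewall communication test"       # agent-to-server message
```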
A Generic OVO Variables and Troubleshooting
Appendix A 151
This appendix describes the variables used in setting up and configuring
both HTTPS and DCE agents in a firewall environment. It also provides
some guidance on how to troubleshoot problems in firewall environments.
Port Usage
General Notes on Port Usage
In the OVO environment, the following types of communication use ports:
❏ RPC Servers (DCE agents only)<br />
❏ RPC Clients (DCE agents only)<br />
❏ TCP Socket Connection (DCE agents only)<br />
❏ HTTP Servers (DCE <strong>and</strong> HTTPS agents)<br />
❏ HTTP Clients (DCE <strong>and</strong> HTTPS agents)<br />
❏ HTTPS Servers (HTTPS agents only)<br />
❏ HTTPS Clients (HTTPS agents only)<br />
RPC Servers<br />
An RPC Server is registered at one fixed port. It can h<strong>and</strong>le multiple<br />
incoming connections on this one port. A connection stays in the<br />
ESTABLISHED state for about 20 seconds <strong>and</strong> can be re-used during this<br />
time. Afterwards the connection disappears from the RPC Server side.<br />
RPC Clients
An RPC Client uses one port in an assigned range for outgoing
communication. A connection stays in the ESTABLISHED state for about
20 seconds and can be re-used for more communication to the same
target during this time. Afterwards the connection stays in the
TIME_WAIT state for about one minute. During this time the port is
blocked and cannot be re-used. A new connection to the same target
during this period will require an additional port.
A connection to another target will require another port in all cases.
TCP Socket Connections
A TCP socket connection is similar to an RPC connection: it has a socket
server and a client connected to it. The socket servers are the
Communication Agent and the Communication Manager. Contrary to an
RPC connection, the connection stays in TIME_WAIT on the socket server
side.
Appendix A
Port Usage on the Management Server<br />
❏ Outgoing Communication<br />
There are two processes for outgoing communication:<br />
Request Sender<br />
The Request Sender is a DCE <strong>and</strong> HTTPS Client. It contacts the<br />
Endpoint Mapper <strong>and</strong> the Control Agent of all the agents. For<br />
these reasons, it might need to have a large range allocated to it.<br />
In the case of very short heartbeat polling intervals, the required<br />
range could be twice the number of nodes.<br />
Examples<br />
HTTPS agents:<br />
ovconfchg -ovrg server -ns bbc.http.ext.ovoareqsdr<br />
-set CLIENT_PORT 12006-12040<br />
DCE agents:<br />
ovconfchg -ovrg server -ns opc.ovoareqsdr -set<br />
OPC_COMM_PORT_RANGE 12006-12040<br />
NOTE If both HTTPS and DCE agents are behind a firewall, both<br />
settings must be configured.<br />
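As a rough sizing aid, the worst case of two ports per node can be worked out with ordinary shell arithmetic. The node count and start port below are illustrative values, not OVO defaults:

```shell
# Worst-case client-port range for the Request Sender, assuming two
# ports per node during very short heartbeat polling intervals.
NODES=17                      # number of managed nodes (illustrative)
PORTS=$((NODES * 2))          # two ports per node in the worst case
START=12006                   # first port of the range (illustrative)
END=$((START + PORTS - 1))    # last port of the range
echo "CLIENT_PORT ${START}-${END}"
```

The resulting range would then be passed to ovconfchg as in the examples above.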
Remote Agent Tool (opcragt)<br />
The Remote Agent Tool (opcragt) is a DCE and HTTPS Client.<br />
It contacts the Endpoint Mapper <strong>and</strong> the Control Agent of all the<br />
agents. For these reasons, it might need to have a large range<br />
allocated to it. In the case of requests going out to all nodes<br />
(opcragt -status -all), the required range could be twice the<br />
number of nodes.<br />
Examples<br />
HTTPS agents:<br />
ovconfchg -ovrg server -ns bbc.http.ext.opcragt -set<br />
CLIENT_PORT 12006-12040<br />
DCE agents:<br />
ovconfchg -ovrg server -ns opc.opcragt -set<br />
OPC_COMM_PORT_RANGE 12006-12040<br />
Configuration Deployment to HTTPS Nodes Tool (opcbbcdist)<br />
The opcbbcdist tool can handle 10 parallel configuration<br />
deployment requests by default. This limit can be increased using the<br />
command:<br />
ovconfchg -ovrg server -ns opc -set \<br />
OPC_MAX_DIST_REQS &lt;number&gt;<br />
Therefore, a port range of at least 10 ports should be chosen. This can<br />
be set using the command:<br />
ovconfchg -ovrg server -ns bbc.http.ext.opcbbcdist \<br />
-set CLIENT_PORT &lt;port range&gt;<br />
If too small a port range is chosen, errors of the following type<br />
are displayed:<br />
(xpl-0) connect() to "&lt;host&gt;:&lt;port&gt;" failed.<br />
(RTL-226) Address already in use.<br />
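For example, the parallelism limit and the client port range might be raised together so they stay in step. The values 20 and 12051-12070 are illustrative only:

```shell
# Illustrative only: allow 20 parallel deployments and give opcbbcdist
# a client port range of matching size.
ovconfchg -ovrg server -ns opc -set OPC_MAX_DIST_REQS 20
ovconfchg -ovrg server -ns bbc.http.ext.opcbbcdist \
    -set CLIENT_PORT 12051-12070
```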
Agent Patch and Upgrade Installation<br />
HTTPS communication is used for all communication, including<br />
patching and upgrading of agent software. Since this task is performed<br />
serially, only a small port range is required:<br />
ovconfchg -ovrg server -ns bbc.http.ext.depl.ovdeploy \<br />
-set CLIENT_PORT &lt;port range&gt;<br />
Certificate Deployment to HTTPS Agents<br />
Except in the case of manual certificate installation, a certificate<br />
request is sent from the agent to the server. When the certificate is<br />
granted, the server sends the signed certificate back to the agent. For<br />
this server-to-agent communication, the client port range can be<br />
specified as follows:<br />
ovconfchg -ovrg server -ns bbc.http.ext.sec.cm.ovcs \<br />
-set CLIENT_PORT &lt;port range&gt;<br />
One port is normally sufficient for this communication, as<br />
simultaneous certificate requests from agents are not possible.<br />
If too small a port range is chosen, the following message is<br />
printed to System.txt or stdout:<br />
(xpl-0) connect() to "&lt;host&gt;:&lt;port&gt;" failed.<br />
(RTL-226) Address already in use.<br />
These processes send out the following communication requests:<br />
❏ Heartbeat polling<br />
❏ Agent requests from the GUI<br />
❏ Applications from the application bank<br />
❏ Configuration distribution<br />
❏ Remote agent requests (start, stop, status)<br />
Since the outgoing communication goes out to several different<br />
systems, the connections cannot normally be re-used. Instead, an<br />
established connection to one agent system will block a port for<br />
communication with a different system. Since the Request Sender is a<br />
multi-threaded application with many threads initiating<br />
communication to agents, it is not possible to handle all<br />
port-restriction-related communication issues correctly.<br />
In the event of these communication issues, a special error message<br />
is written to the System.txt file. The communication issues could<br />
result in:<br />
❏ Incorrect messages about agents being down<br />
❏ Lost action requests<br />
❏ Lost distribution requests<br />
Because of these effects, the port range for outgoing communication<br />
on the server must be large enough. Error messages in the System.txt<br />
file about the port range being too small are serious, and the range<br />
must be increased.<br />
NOTE In the example settings, there are two different port ranges for the<br />
outgoing communication processes Request Sender (12006-12040)<br />
<strong>and</strong> Remote Agent Tool (12041-12050). This has the advantage that<br />
excessive use of the opcragt comm<strong>and</strong> will not influence the Request<br />
Sender’s communication. The disadvantage is that a larger range has<br />
to be opened on the firewall.<br />
Distribution Adapter (opcbbcdist)<br />
opcbbcdist controls the configuration deployment to HTTPS nodes.<br />
The deployer is used for policy <strong>and</strong> instrumentation deployment.<br />
Installation/Upgrade/Patch Tool (ovdeploy)<br />
The ovdeploy tool can be used to list the installed OpenView products<br />
and components. The following three levels of information can be<br />
displayed:<br />
❏ Basic inventory<br />
❏ Detailed inventory<br />
❏ Native inventory<br />
For more detailed information, refer to the HTTPS Agent Concepts and<br />
Configuration Guide.<br />
Certificate Server (ovcs)<br />
For server-based HTTPS agent installation, ovcs is the server extension<br />
that handles certificate requests, and is controlled by ovcd.<br />
Communication Utility (bbcutil)<br />
The bbcutil command is used to control the OV Communication Broker<br />
and is an important troubleshooting tool.<br />
For syntax information <strong>and</strong> details of how to use this tool, refer to the<br />
bbcutil(1) man page.<br />
Display <strong>Manager</strong> (12000)<br />
The Display Manager is an RPC Server and can be forced to one port. It<br />
is bound to a port and does not communicate with agents. It can be safely<br />
ignored in a firewall environment.<br />
Message Receiver (12001)<br />
The Message Receiver is an RPC Server <strong>and</strong> can be forced to one port.<br />
Distribution <strong>Manager</strong> (12002)<br />
The Distribution <strong>Manager</strong> is an RPC Server <strong>and</strong> can be forced to one<br />
port.<br />
Communication <strong>Manager</strong> (12003)<br />
The Communication <strong>Manager</strong> is a Socket Server <strong>and</strong> can be forced to one<br />
port.<br />
Forward <strong>Manager</strong> (12004-12005)<br />
The Forward <strong>Manager</strong> is an RPC Client. It contacts the Endpoint<br />
Mapper <strong>and</strong> the Message Receiver. This requires two ports.<br />
Request Sender (12006-12040)<br />
The Request Sender is an RPC Client. It contacts the Endpoint Mapper<br />
<strong>and</strong> the Control Agent of all the agents. For these reasons, it might need<br />
to have a large range allocated to it. In the case of very short heartbeat<br />
polling intervals, the required range could be twice the number of nodes.<br />
Remote Agent Tool (12041-12050)<br />
The Remote Agent Tool (opcragt) is an RPC Client. It contacts the<br />
Endpoint Mapper <strong>and</strong> the Control Agent of all the agents. For these<br />
reasons, it might need to have a large range allocated to it. In the case of<br />
requests going out to all nodes (opcragt -status -all), the required<br />
range could be twice the number of nodes.<br />
TCP Socket Server (12051-12060)<br />
The TCP Socket Server is a Socket Server. Multiple instances can run in<br />
parallel; the maximum number can be configured in the administrator<br />
GUI (Node Bank: Actions -> Server -> Configure... -> Parallel<br />
Distribution). The specified range must be at least as large as that<br />
number. For distribution requests to a larger number of nodes, this range<br />
must be larger.<br />
The agents outside the firewall can be configured to use a different<br />
distribution mechanism, so this range does not need to be opened on the<br />
firewall.<br />
NT Virtual Terminal (12061)<br />
The NT Virtual Terminal server process is a Socket Server. It can be<br />
forced to one port. Two clients will connect to this socket when a terminal<br />
connection is established. After closing, these connections will stay in<br />
TIME_WAIT on the agent side. On the server side, the port can be<br />
re-used immediately.<br />
If multiple NT Virtual Terminals are to be run in parallel, the port<br />
range for this process must be increased.<br />
Troubleshooting Problems<br />
Defining the Size of the Port Range<br />
The example settings described in “Port Usage” on page 153 are<br />
only starting points for the installation of OVO in a firewall<br />
environment. The actual size of the management server’s port range<br />
cannot be given in advance, since it depends on several user-defined<br />
parameters, of which the following are examples:<br />
❏ Number of nodes<br />
❏ Number of nodes using DCE/TCP or HTTPS as communication type<br />
❏ Heartbeat polling interval<br />
❏ Number of outgoing agent requests (applications, remote status, etc.)<br />
Because of this, the System.txt file has to be monitored for error<br />
messages as described in “Error Messages for Server Port H<strong>and</strong>ling” on<br />
page 196. If there are error messages about the port range being too<br />
small, one of the following actions should be executed:<br />
❏ Increase the size of the port range.<br />
❏ Increase the heartbeat polling interval of the nodes using TCP as<br />
communication type.<br />
❏ Turn on Agent Sends Alive Packets for nodes located inside the<br />
firewall. See “Agent Sends Alive Packets” on page 31.<br />
Monitoring Nodes Inside <strong>and</strong> Outside the <strong>Firewall</strong><br />
In many environments there is one OVO management server that<br />
monitors many nodes inside the firewall and a small number of nodes<br />
outside the firewall. This may require a large number of ports to be<br />
opened in the firewall, because the nodes inside also use the defined<br />
port range. Here are some hints on how to avoid this:<br />
❏ Switch as many nodes as possible located inside the firewall to<br />
DCE/UDP. This will prevent them from blocking ports in the range<br />
for long periods, as UDP does not keep connections open.<br />
❏ Turn on Agent Sends Alive Packets for all nodes inside the<br />
firewall. This also prevents these nodes from being polled, as they<br />
report their health state on their own.<br />
❏ If only HTTPS agents are outside the firewall, an HTTP proxy should<br />
be used. All communication between server and agents will pass<br />
through the proxy. Therefore, the outgoing ports of this proxy must<br />
be opened in the firewall. There is no need to limit the port ranges of<br />
the OVO agent server processes.<br />
Various Agent Messages<br />
Sometimes messages arrive in the message browser reporting agents as<br />
down, and after a short time the agents are reported as running again.<br />
This is caused by port access problems. If the port range is not large<br />
enough, these messages will arrive almost continuously, even though<br />
the agent is running continuously.<br />
See “Defining the Size of the Port Range” on page 161.<br />
Network Tuning for <strong>HP</strong>-UX 10.20<br />
Over time netstat might report TCP connections left in state<br />
FIN_WAIT_2. These are never closed <strong>and</strong> fill up the system.<br />
To overcome this problem, the FIN_WAIT_2 timer should be turned on<br />
using set_fin_time.<br />
This is a known DCE problem described in SR # 1653144972:<br />
*** PROBLEM TEXT ***<br />
There are cases where we can get FIN_WAIT_2 connections that<br />
never go away. We need a timer that customers can set to<br />
remove these connections.<br />
*** FIX TEXT ***<br />
Functionality has been added to the transport to allow<br />
customers to turn FIN_WAIT_2 timer on. The default is OFF.<br />
Customers need a new script that turns this timer ON <strong>and</strong> sets<br />
it to customer defined time. This functionality will be in<br />
every release or patch dated after 12/01/95.<br />
*** ADDITIONAL INFO ***<br />
This timer is tcp_fin_wait_timer <strong>and</strong> was introduced in patch<br />
PHNE_6586 (800 9.04). You also need the ‘unsupported’<br />
script, which is called set_fin_time to actually set the<br />
timer to something other than the default (no timeout).<br />
Using the script will not clear any sockets already ‘stuck’,<br />
only sockets created after the timer has been set.<br />
To get the script to set the timer, contact the <strong>HP</strong> Response Center to get<br />
it from:<br />
http://ovweb.bbn.hp.com/suc/hp/htdocs \<br />
/ito/database/networking/set_fin_time<br />
The timer will need to be reset after every reboot <strong>and</strong> before the OVO<br />
server processes are started. For example, set_fin_time -t 1200 will<br />
cause all TCP connections in FIN_WAIT_2 state to be closed after 10<br />
minutes.<br />
NOTE A timer that removes connections hanging in FIN_WAIT_2<br />
breaks RFC 793. This is the reason why the timer will NOT be supported.<br />
Network Tuning for <strong>HP</strong>-UX 11.x<br />
<strong>HP</strong>-UX 11.0 introduces the ndd(1M) tool to tune network parameters.<br />
❏ tcp_time_wait_interval<br />
This defines how long a connection persists in TIME_WAIT. The interval<br />
is specified in milliseconds. The default is 60000 (1 minute). This<br />
allows you to decrease the time a connection stays in TIME_WAIT, for<br />
example to one second.<br />
Get the current value:<br />
# ndd -get /dev/tcp tcp_time_wait_interval<br />
Set the value to 1 second:<br />
# ndd -set /dev/tcp tcp_time_wait_interval 1000<br />
❏ tcp_fin_wait_2_timeout<br />
This parameter sets the timer to stop idle FIN_WAIT_2 connections.<br />
It specifies an interval, in milliseconds, after which the TCP connection<br />
will be unconditionally killed. An appropriate reset segment will be sent<br />
when the connection is killed. The default timeout is 0, which allows<br />
the connection to live forever, as long as the far side continues to<br />
answer keepalives.<br />
Get the current value (0 is turned off):<br />
# ndd -get /dev/tcp tcp_fin_wait_2_timeout<br />
Set the value to 10 minutes:<br />
# ndd -set /dev/tcp tcp_fin_wait_2_timeout 600000<br />
NOTE The timeout value is calculated as follows:<br />
1000 (ms per second) * 60 (seconds per minute) * 10 (minutes) = 600000 ms.<br />
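The arithmetic in the note can be checked directly in the shell:

```shell
# 1000 ms/second * 60 seconds/minute * 10 minutes
echo $((1000 * 60 * 10))
```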
These settings need to be redefined whenever the system is rebooted. To<br />
do this, update /etc/rc.config.d/nddconf with the required<br />
parameters as shown in the following example:<br />
TRANSPORT_NAME[0]=tcp<br />
NDD_NAME[0]=tcp_time_wait_interval<br />
NDD_VALUE[0]=1000<br />
TRANSPORT_NAME[1]=tcp<br />
NDD_NAME[1]=tcp_fin_wait_2_timeout<br />
NDD_VALUE[1]=600000<br />
Network Tuning for Solaris<br />
On Solaris, the ndd(1M) tool is used to tune network parameters.<br />
❏ tcp_time_wait_interval<br />
This defines how long a connection persists in TIME_WAIT. The interval<br />
is specified in milliseconds. The default is 240000 (4 minutes). This<br />
allows you to decrease the time a connection stays in TIME_WAIT, for<br />
example to one second.<br />
Get the current value:<br />
ndd -get /dev/tcp tcp_time_wait_interval<br />
Set the value to 1 second:<br />
ndd -set /dev/tcp tcp_time_wait_interval 1000<br />
❏ tcp_fin_wait_2_flush_interval<br />
This parameter sets the timer to stop idle FIN_WAIT_2 connections.<br />
It specifies an interval, in milliseconds, after which the TCP<br />
connection will be unconditionally killed. An appropriate reset<br />
segment will be sent when the connection is killed. The default<br />
timeout is 675000 (approximately 11 minutes).<br />
To obtain the current value (0 is turned off):<br />
ndd -get /dev/tcp tcp_fin_wait_2_flush_interval<br />
Set the value to 10 minutes:<br />
ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 600000<br />
NOTE The timeout value is calculated as follows:<br />
1000 (ms per second) * 60 (seconds per minute) * 10 (minutes) = 600000 ms.<br />
None of these settings will survive a reboot, and by default there is no<br />
configuration file where they can easily be specified. It is therefore<br />
recommended to add these settings to /etc/rc2.d/S69inet.<br />
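A minimal sketch of such an addition, using the two values chosen above, might look as follows (the script location follows the recommendation above; adapt it to your system):

```shell
# Fragment appended to /etc/rc2.d/S69inet so the Solaris TCP tuning
# is re-applied at every boot (values from the examples above).
ndd -set /dev/tcp tcp_time_wait_interval 1000
ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 600000
```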
Tracing of the <strong>Firewall</strong><br />
In case of communication problems, after checking whether they are<br />
caused by all the ports being in use, it is recommended to trace the<br />
firewall and check what gets blocked or rejected there. If OVO<br />
communication is blocked at the firewall, the port ranges in the OVO<br />
configuration and the firewall configuration probably do not match.<br />
Refer to the firewall documentation to see how tracing is used.<br />
Links<br />
The following web page contains additional White Papers on firewall<br />
configurations for other <strong>HP</strong> OpenView products:<br />
http://www.openview.hp.com/library/papers/<br />
White Papers for the following products are available:<br />
❏ Network Node <strong>Manager</strong><br />
Managing Your Network Through <strong>Firewall</strong>s<br />
❏ Performance<br />
<strong>Firewall</strong> <strong>Configuration</strong> for <strong>HP</strong> OpenView Performance <strong>Manager</strong>,<br />
Performance Agent, Reporter<br />
❏ Reporter<br />
<strong>Firewall</strong> <strong>Configuration</strong> for <strong>HP</strong> OpenView Performance <strong>Manager</strong>,<br />
Agent <strong>and</strong> Reporter<br />
B OVO Variables <strong>and</strong><br />
Troubleshooting for HTTPS<br />
Managed Nodes<br />
Appendix B 169
This appendix describes the variables used in setting up and configuring<br />
OVO HTTPS agents in a firewall environment. It also provides some<br />
guidance on troubleshooting firewall problems.<br />
<strong>Configuration</strong> Examples<br />
A firewall rule configuration may be presented as displayed in Table B-1.<br />
Table B-1 Example of a Firewall Rule<br />
Source       Destination  Protocol  Source Port        Destination Port<br />
MGMT SRV     HTTPS NODE   TCP       ANY, Configurable  383<br />
HTTPS NODE   MGMT SRV     TCP       ANY, Configurable  383<br />
The firewall configuration file may appear as displayed in Example B-1<br />
below:<br />
Example B-1 Example Firewall Configuration File<br />
accept tcp from 10.136.120.163 port *<br />
to 192.168.1.* port 383<br />
In this instance 10.136.120.163 is the management server’s address<br />
and 192.168.1.* is the managed node’s address.<br />
Port Usage on Managed Nodes<br />
Table B-2 specifies the managed node communication ports.<br />
Table B-2 Managed Node Communication Port Settings<br />
Agent Type      Communication Type  Port Range<br />
ovbbccb         HTTPS Server        383<br />
Message Agent   HTTPS Client        ANY, Configurable<br />
Table B-3 specifies the console communication ports.<br />
Table B-3 Console Communication Port Settings<br />
Agent Type                   Communication Type  Port Range<br />
Reporter (3.5)               HTTPS Client        ANY, Configurable<br />
Performance Manager (4.0.5)  HTTPS Client        ANY, Configurable<br />
OVO Variables Used with HTTPS Agents <strong>and</strong><br />
<strong>Firewall</strong>s<br />
The following variables can be set for use in firewall environments:<br />
❏ SERVER_PORT<br />
❏ SERVER_BIND_ADDR<br />
❏ CLIENT_PORT<br />
❏ CLIENT_BIND_ADDR<br />
❏ PROXY<br />
bbc.http is the HTTP namespace for node-specific configuration. The<br />
common parameters are introduced below. For more detailed<br />
information, refer to the HTTPS Agent Concepts and Configuration<br />
Guide and the bbc.ini file at the following location:<br />
/opt/OV/misc/XPL/config/defaults/bbc.ini<br />
NOTE For application-specific settings, see the section bbc.http.ext.*.<br />
Application-specific settings in bbc.http.ext.* override node-specific<br />
settings in bbc.http.<br />
SERVER_PORT<br />
Used for ovbbccb. By default this port is set to 0. If set to 0, the<br />
operating system assigns the first available port number. This is the port<br />
used by the application to listen for requests.<br />
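For example, the Communication Broker normally listens on its well-known port 383 (see Table B-2); forcing it explicitly might look like this (shown as a sketch, not a required step):

```shell
# Force ovbbccb to listen on its well-known port 383.
ovconfchg -ns bbc.http -set SERVER_PORT 383
```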
SERVER_BIND_ADDR<br />
Used for ovbbccb. Bind address for the server port. Default is<br />
localhost.<br />
CLIENT_PORT<br />
Bind port for client requests. This may also be a range of ports, for<br />
example 10000-10020. This is the bind port on the originating side of a<br />
request. Default is port 0. The operating system will assign the first<br />
available port.<br />
Note that MS Windows systems do not immediately release ports for<br />
reuse. Therefore, on MS Windows systems this parameter should be set<br />
to a large range.<br />
CLIENT_BIND_ADDR<br />
Bind address for the client port. Default is INADDR_ANY.<br />
PROXY<br />
Defines which proxy <strong>and</strong> port to use for a specified hostname.<br />
Format:<br />
proxy:port +(a)-(b); proxy2:port2 +(c)-(d); ...;<br />
a, c: list of hostnames, separated by a comma or a semicolon, for which<br />
this proxy shall be used.<br />
b, d: list of hostnames, separated by a comma or a semicolon, for which<br />
the proxy shall not be used.<br />
The first matching proxy is chosen.<br />
It is also possible to use IP addresses instead of hostnames so 15.*.*.*<br />
or 15:*:*:*:*:*:*:* would be valid as well, but the correct number of<br />
dots or colons MUST be specified. IP version 6 support is not currently<br />
available but will be available in the future.<br />
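Applied with ovconfchg, a PROXY setting of this form might look as follows (the proxy name, port, and host patterns are illustrative):

```shell
# Illustrative: use web-proxy:8088 for *.ext.com targets, but not for
# hosts in *.int.ext.com.
ovconfchg -ns bbc.http -set PROXY \
    "web-proxy:8088+(*.ext.com)-(*.int.ext.com)"
```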
HTTPS Managed Node Variables<br />
The following variables can be set with the ovconfchg tool for use in a<br />
firewall environment that includes HTTP-based communication<br />
components:<br />
❏ CLIENT_BIND_ADDR<br />
❏ CLIENT_PORT<br />
❏ PROXY<br />
❏ SERVER_BIND_ADDR<br />
❏ PORT<br />
CLIENT_BIND_ADDR<br />
Usage HTTP client<br />
Values &lt;IP address&gt;<br />
Default not set<br />
Sets the IP address for the specified application’s OpenView HTTP<br />
client. See also “Systems with Multiple IP Addresses” on page 61.<br />
Examples<br />
All clients:<br />
ovconfchg -ns bbc.http -set CLIENT_BIND_ADDR 10.10.10.10<br />
opcmsga only:<br />
ovconfchg -ns bbc.http.ext.opcmsga -set \<br />
CLIENT_BIND_ADDR 10.10.10.10<br />
CLIENT_PORT<br />
Usage HTTP client<br />
Values &lt;port range&gt;<br />
Default Any<br />
Sets the port number or a range of ports for the specified application’s<br />
OpenView HTTP client.<br />
Examples<br />
All clients:<br />
ovconfchg -ns bbc.http -set CLIENT_PORT 14000-14010<br />
opcmsga only:<br />
ovconfchg -ns bbc.http.ext.opcmsga -set \<br />
CLIENT_PORT 15000-15005<br />
PROXY<br />
Usage HTTP client<br />
Values proxy:port +(a)-(b); proxy2:port2 +(c)-(d); …<br />
Default not set<br />
Sets the proxy to be used to contact the specified target node.<br />
The format is proxy:port +(a)-(b); proxy2:port2 +(c)-(d); <strong>and</strong> so<br />
on. The variables a, b, c <strong>and</strong> d are comma separated lists of hostnames,<br />
networks, <strong>and</strong> IP addresses that apply to the target nodes. Multiple<br />
proxies may be defined for one PROXY key. “-” before the list indicates that<br />
those entities do not use this proxy, “+” before the list indicates that<br />
those entities do use this proxy. The first matching proxy is used.<br />
Examples:<br />
PROXY web-proxy:8088<br />
Meaning: the proxy web-proxy will be used with port 8088 for every<br />
target node.<br />
PROXY web-proxy:8088-(*.ext.com)<br />
Meaning: the proxy web-proxy will be used with port 8088 for every<br />
target node (*) except hosts that match *.ext.com, for example, except<br />
for karotte.ext.com.<br />
web-proxy:8088+(*.*.ext.com)-(*.subnet1.ext.com);proxy2:8089<br />
+(*)-(*.int.com)<br />
Meaning: the proxy web-proxy will be used with port 8088 for every<br />
target node (*) that matches *.*.ext.com except hosts from subnet<br />
*.subnet1.ext.com. If the first proxy does not match, then the second<br />
proxy will be tried with port 8089. If the second proxy does not match,<br />
then no proxy will be used.<br />
SERVER_BIND_ADDR<br />
Usage ovbbccb<br />
Values &lt;IP address&gt;<br />
Default not set<br />
Sets the IP address for the specified application’s OpenView<br />
Communication Broker. See also “Systems with Multiple IP Addresses”<br />
on page 61.<br />
Example:<br />
SERVER_BIND_ADDR 10.10.10.10<br />
C OVO Variables and<br />
Troubleshooting for DCE<br />
Managed Nodes<br />
Appendix C 179
This appendix describes the variables used in setting up and configuring<br />
OVO DCE agents in a firewall environment. It also provides some<br />
guidance on troubleshooting firewall problems.<br />
<strong>Configuration</strong> Examples<br />
A firewall rule configuration may be presented as displayed in Table C-1.<br />
Table C-1 Example of a Firewall Rule<br />
Source     Destination  Protocol  Source Port  Destination Port  Description<br />
MGMT SRV   DCE NODE     TCP       12006-12040  135               Endpoint mapper<br />
MGMT SRV   DCE NODE     TCP       12041-12050  135               Endpoint mapper<br />
MGMT SRV   DCE NODE     TCP       12006-12040  13001             Control agent<br />
MGMT SRV   DCE NODE     TCP       12006-12040  13007             Communication agent<br />
MGMT SRV   DCE NODE     TCP       12041-12050  13001             Control agent<br />
The firewall configuration file may appear as displayed in Example C-1<br />
below:<br />
Example C-1 Example Firewall Configuration File<br />
accept tcp from 10.136.120.163 port 12006-12040<br />
to 192.168.1.3 port 135<br />
accept tcp from 10.136.120.163 port 12041-12050<br />
to 192.168.1.3 port 135<br />
accept tcp from 10.136.120.163 port 12006-12040<br />
to 192.168.1.3 port 13001<br />
accept tcp from 10.136.120.163 port 12006-12040<br />
to 192.168.1.3 port 13007<br />
accept tcp from 10.136.120.163 port 12041-12050<br />
to 192.168.1.3 port 13001<br />
In this instance 10.136.120.163 is the management server’s address<br />
and 192.168.1.3 is the managed node’s address.<br />
NOTE Within this example, the first two rules for the ranges 12006-12040 <strong>and</strong><br />
12041-12050 can be combined into one rule because they follow each<br />
other sequentially. But for a clearer underst<strong>and</strong>ing they appear as<br />
separate rules.<br />
OVO Variables Used with DCE Agents <strong>and</strong><br />
<strong>Firewall</strong>s<br />
The following variables can be set for use in firewall environments:<br />
❏ OPC_AGENT_NAT<br />
❏ OPC_COMM_PORT_RANGE<br />
❏ OPC_<strong>HP</strong>DCE_CLIENT_DISC_TIME<br />
❏ OPC_DIST_MODE<br />
❏ OPC_MAX_PORT_RETRIES<br />
❏ OPC_RESTRICT_TO_PROCS<br />
❏ OPC_RPC_ONLY<br />
See Table C-2 for a list of locations of the opc[sv]info file.<br />
Table C-2 Location of the opcinfo File<br />
Platform  Location<br />
HP-UX     /opt/OV/bin/OpC/install/opcinfo<br />
Solaris   /opt/OV/bin/OpC/install/opcinfo<br />
AIX       /usr/lpp/OV/OpC/install/opcinfo<br />
UNIX      /opt/OV/bin/OpC/install/opcinfo<br />
Windows   &lt;drive&gt;:\usr\OV\bin\OpC\install\opcinfo<br />
OPC_AGENT_NAT<br />
Usage Agent<br />
Values TRUE | FALSE<br />
Default FALSE<br />
OVO configuration distribution usually checks that the configured IP<br />
address is a valid address on this system before requesting the<br />
configuration data from the management server. This causes the<br />
distribution in a NAT environment to fail because the configured IP<br />
address does not usually exist on the system. By setting this flag to TRUE,<br />
the distribution uses only the data for the IP address as configured in<br />
OPC_IP_ADDRESS.<br />
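In the opcinfo file of a node behind NAT, the corresponding entries might look like this (the address is taken from the configuration examples above and is illustrative):

```
OPC_AGENT_NAT TRUE
OPC_IP_ADDRESS 192.168.1.3
```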
OPC_COMM_PORT_RANGE<br />
Usage Agent | Server<br />
Values <port number or port range>
Default none<br />
This variable defines the port(s) that may be used by the process for RPC<br />
communication. For RPC server processes it is sufficient to give a single<br />
port number. For RPC clients, a range of ports must be given.<br />
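For instance, the two cases might look like this in opcinfo (port numbers follow those used elsewhere in this appendix):

```
OPC_COMM_PORT_RANGE 13001        # RPC server: a single port is sufficient
OPC_COMM_PORT_RANGE 13004-13006  # RPC client: a range must be given
```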
OPC_HPDCE_CLIENT_DISC_TIME
Usage <strong>HP</strong>-UX Server<br />
Values <time in seconds>
Default none<br />
This variable configures the HPDCE_CLIENT_DISC_TIME environment
variable for the DCE environment. It specifies an interval, in seconds,
after which a TCP connection goes from ESTABLISHED to TIME_WAIT.
See “Communication Issues with NT Nodes” on page 200 for details.
OPC_DIST_MODE<br />
Usage Agent<br />
Values DIST_TCPSOCK | DIST_RPC<br />
Default DIST_TCPSOCK<br />
OVO configuration distribution by default will use a TCP socket<br />
connection to send the actual data. This causes an additional TCP<br />
connection to be opened from the agent to the management server. Since<br />
this is not an RPC connection, it does not honor the setting of the<br />
RPC_RESTRICTED_PORTS environment variable.<br />
By setting this flag to DIST_RPC, the distribution data will be sent in an<br />
RPC call.<br />
NOTE This might cause more traffic in bad or slow network environments when
UDP is used (NCS or DCE/UDP configured as the communication type).
OPC_MAX_PORT_RETRIES<br />
Usage Agent | Server<br />
Values <number of retries>
Default 13<br />
If all ports in the allowed range are in use, a retry mechanism takes
effect. Each attempt waits 5 seconds before retrying the connection.
This setting gives the number of retries before the communication is
aborted.
The default value of 13 causes a 65 second delay. Since connections stay<br />
in the TIME_WAIT state for 60 seconds (default on <strong>HP</strong>-UX), this will wait<br />
until a connection is cleared.<br />
Setting this to 0 will disable the retry mechanism.<br />
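The delay can be recalculated for other settings; a small sketch of the arithmetic:

```shell
# Worst-case delay before communication is aborted:
# each of the OPC_MAX_PORT_RETRIES attempts waits 5 seconds.
retries=13
interval=5
delay=$((retries * interval))
echo "$delay seconds"   # 65 seconds with the default of 13 retries
```

Because connections stay in TIME_WAIT for 60 seconds by default on HP-UX, the default of 13 retries is just enough to outlast a blocked port.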
OPC_RESTRICT_TO_PROCS<br />
Usage Agent<br />
Values <process name>
Default none<br />
This flag marks all subsequent entries in opcinfo as valid for the
given process only. This applies to all following lines until the next
occurrence of OPC_RESTRICT_TO_PROCS or the end of the file.
Use it to set different values for the same OVO configuration
variable, for example, OPC_COMM_PORT_RANGE, for different processes.
For a usage example, see “Configuring the OVO Management
Server” on page 73.
If you need to make process-specific settings on the OVO management<br />
server, use the following comm<strong>and</strong>:<br />
ovconfchg -ovrg server -ns opc.<process name> -set \
<variable name> <value>
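On the agent side, such a section might look like the following sketch in opcinfo, assuming opcctla and opcmsga are the process names of the Control Agent and Message Agent, with the port numbers used elsewhere in this appendix:

```
OPC_RESTRICT_TO_PROCS opcctla
OPC_COMM_PORT_RANGE 13001
OPC_RESTRICT_TO_PROCS opcmsga
OPC_COMM_PORT_RANGE 13004-13006
```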
OPC_RPC_ONLY<br />
Usage Agent<br />
Values TRUE | FALSE<br />
Default FALSE<br />
The first thing checked when initiating communication to the
management server is whether the system is up and running and whether
the endpoint mapper is running. This is done using ICMP and simple UDP
communication. If the system is down, this check is less expensive than
a failing RPC call.
Since this communication is usually blocked at the firewall in firewall
environments, it can be turned off by setting this flag to TRUE.
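An opcinfo fragment for an agent behind a firewall might therefore combine this flag with OPC_DIST_MODE, both described in this appendix (a sketch; adapt to your environment):

```
OPC_RPC_ONLY TRUE
OPC_DIST_MODE DIST_RPC
```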
Managed Node Variables<br />
The following variables can be set in the nodeinfo file for use in a<br />
firewall environment that includes HTTP-based communication<br />
components:<br />
❏ CLIENT_BIND_ADDR()<br />
❏ CLIENT_PORT()<br />
❏ PROXY<br />
❏ SERVER_BIND_ADDR()<br />
❏ SERVER_PORT()<br />
The nodeinfo file is located in the following directories on the managed<br />
nodes:<br />
AIX: /var/lpp/OV/conf/OpC/nodeinfo<br />
UNIX: /var/opt/OV/conf/OpC/nodeinfo<br />
Windows: <drive>:\usr\OV\conf\OpC\nodeinfo
CLIENT_BIND_ADDR()<br />
Usage HTTP client (Reporter <strong>and</strong>/or Performance <strong>Manager</strong>)<br />
Values <IP address>
Default not set<br />
Sets the IP address for the specified application’s OpenView HTTP<br />
client. Currently the only valid application name is<br />
com.hp.openview.CodaClient. See also “Systems with Multiple IP<br />
Addresses” on page 61.<br />
Example:<br />
CLIENT_BIND_ADDR(com.hp.openview.CodaClient) 10.10.10.10
CLIENT_PORT()<br />
Usage HTTP client (Reporter <strong>and</strong>/or Performance <strong>Manager</strong>)<br />
Values <port range>
Default not set<br />
Sets the port number or a range of ports for the specified application’s<br />
OpenView HTTP client. Currently the only valid application name is<br />
com.hp.openview.CodaClient.<br />
Example:
CLIENT_PORT(com.hp.openview.CodaClient) 14000-14003<br />
PROXY<br />
Usage HTTP client (Reporter <strong>and</strong>/or Performance <strong>Manager</strong>)<br />
Values proxy:port +(a)-(b); proxy2:port2 +(c)-(d); …<br />
Default not set<br />
Sets the proxy for any OpenView HTTP clients running on the computer.<br />
Clients can be Reporter or Performance <strong>Manager</strong>.<br />
The format is proxy:port +(a)-(b); proxy2:port2 +(c)-(d); <strong>and</strong> so<br />
on. The variables a, b, c <strong>and</strong> d are comma separated lists of hostnames,<br />
networks, <strong>and</strong> IP addresses that apply to the proxy. Multiple proxies may<br />
be defined for one PROXY key. “-” before the list indicates that those<br />
entities do not use this proxy, “+” before the list indicates that those<br />
entities do use this proxy. The first matching proxy is used.<br />
Examples:<br />
PROXY web-proxy:8088<br />
Meaning: the proxy web-proxy will be used with port 8088 for every<br />
server.<br />
PROXY web-proxy:8088-(*.bbn.hp.com)
Meaning: the proxy web-proxy will be used with port 8088 for every
server (*) except hosts that match *.bbn.hp.com, for example,
karotte.bbn.hp.com.
PROXY web-proxy:8088+(20.120.*.*)-(20.120.20.*);proxy2:8089+(*)-(*.hp.com)
Meaning: the proxy web-proxy will be used with port 8088 for every
server (*) whose IP address matches 20.120, except hosts that match
20.120.20. If the first proxy does not match, the second proxy is
tried with port 8089. If the second proxy does not match either, no
proxy is used.
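The first-match rule for this particular PROXY value can be sketched in shell; this is a hypothetical illustration of the matching logic, not part of OVO:

```shell
# Emulates PROXY web-proxy:8088+(20.120.*.*)-(20.120.20.*);proxy2:8089+(*)-(*.hp.com)
# for a given server name or address: prints the proxy used, or "none".
match() {
  case $1 in
    20.120.20.*) ;;                             # excluded by -(20.120.20.*)
    20.120.*) echo "web-proxy:8088"; return ;;  # included by +(20.120.*.*)
  esac
  case $1 in
    *.hp.com) ;;                                # excluded by -(*.hp.com)
    *) echo "proxy2:8089"; return ;;            # included by +(*)
  esac
  echo "none"
}
match 20.120.30.5        # first proxy applies
match 20.120.20.7        # excluded from the first, falls through to the second
match karotte.bbn.hp.com # excluded from both: no proxy
```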
PROXY web-proxy:8088-(*.hp.com)+(*.bbn.hp.com)<br />
Meaning: the proxy web-proxy will be used with port 8088 for every<br />
server (*) except hosts that match *.hp.com, for example, www.hp.com.<br />
The exception is hostnames that match *.bbn.hp.com. For example, for<br />
karotte.bbn.hp.com the proxy server will be used.<br />
SERVER_BIND_ADDR()<br />
Usage HTTP server (embedded performance component)<br />
Values <IP address>
Default not set<br />
Sets the IP address for the specified application’s OpenView HTTP
server. Currently the only valid application name is
com.hp.openview.Coda. See also “Systems with Multiple IP
Addresses” on page 61.
Communication Types<br />
DCE/UDP Communication Type DCE/UDP cannot be completely
restricted to a port range. Since all platforms where DCE is available
also offer DCE/TCP, it is recommended that DCE/TCP be used instead.
If there is a need to use DCE/UDP, the DCE daemon (rpcd/dced) can be<br />
forced to use a specific port range only. This is done by setting the<br />
RPC_RESTRICTED_PORTS variable before starting the daemon in addition<br />
to the setting for the server or agent processes.<br />
NOTE Restricting the DCE daemon’s port range affects all applications
that use RPC communications on that system. They will all share the
same port range.
NCS Communication Type Since NCS uses additional ports to
answer connection requests, the firewall has to be opened up further for
NCS nodes. Table C-3 specifies the filter rules that must be applied.
Table C-3 Filter Rules for NCS Node Ports
Source     Destination  Protocol  Source Port   Destination Port  Description
MGMT SRV   NCS NODE     UDP       12006-12040,  135               Endpoint map
                                  12041-12050
NCS NODE   MGMT SRV     UDP       any           135               Endpoint map
MGMT SRV   NCS NODE     UDP       12006-12040,  any               Control Agent,
                                  12041-12050                     Communication Agent
NCS NODE   MGMT SRV     UDP       any           12001             Message Receiver
                                                12002             Distribution Manager
                                                12003             Communication Manager
See “Configuration Distribution” on page 87 for notes on the distribution
mechanism.
Sun RPC Communication Type For Novell NetWare managed nodes,
the communication type Sun RPC is used. Since no port restriction is
possible with Sun RPC, the firewall needs to be opened up completely
for communication between the managed node and the management
server. The communication type TCP or UDP can be selected in the OVO
Node Bank. For Sun RPC, the endpoint mapper is located on port 111. In
case UDP is selected, see “Configuration Distribution” on page 87.
NOTE It is not recommended to use Novell NetWare nodes in a firewall
environment.
Example:<br />
SERVER_BIND_ADDR(com.hp.openview.Coda) 10.10.10.10<br />
SERVER_PORT()<br />
Usage HTTP server (embedded performance component)<br />
HTTP client (Reporter <strong>and</strong>/or Performance <strong>Manager</strong>)<br />
Values <port number>
Default SERVER_PORT(com.hp.openview.Coda) 381
SERVER_PORT(com.hp.openview.bbc.LLBServer) 383
Sets the port number for the specified application’s OpenView HTTP<br />
server. Currently the only valid application names are<br />
com.hp.openview.Coda <strong>and</strong> com.hp.openview.bbc.LLBServer.<br />
Examples:
SERVER_PORT(com.hp.openview.Coda) 381<br />
SERVER_PORT(com.hp.openview.bbc.LLBServer) 383<br />
Port Usage on Managed Nodes<br />
❏ RPC Server<br />
The managed node registers the Control Agent as an RPC server. It
handles all incoming RPC calls. See “Control Agent (13001)” on
page 191.
❏ Socket Server<br />
In the case of a bulk transfer request from the Open Agent Interface,<br />
the Communication Agent is started as a Socket Server. See<br />
“Communication Agent (13007)” on page 192.<br />
❏ RPC Clients<br />
Outgoing communication is sent from the Distribution Agent <strong>and</strong><br />
from the Message Agent.<br />
The Distribution Agent can retrieve new configuration data using a<br />
special socket connection. This is disabled for firewalls as described<br />
in “<strong>Configuration</strong> Distribution” on page 87. See “Distribution Agent<br />
(13011-13013)” on page 192.<br />
The Message Agent can send bulk data to the server using the Open<br />
Agent Interface. In this case it will establish a direct socket<br />
connection to the Communication <strong>Manager</strong>. This can not be disabled.<br />
See “Message Agent (13004-13006)” on page 192.<br />
Usually there is only one target system for communication. The<br />
connections can be re-used after they have been established. Therefore it<br />
is possible to restrict the RPC clients to a small range of ports.<br />
In a multiple manager environment the port range for the Message<br />
Agent should be increased.<br />
The agent can handle communication issues that are related to the port
restriction. It writes a message to the System.txt file and retries the
communication. This may cause delays but prevents message loss.
Control Agent (13001)<br />
The Control Agent is an RPC Server <strong>and</strong> can be forced to use one port.<br />
Distribution Agent (13011-13013)<br />
The Distribution Agent is an RPC Client. It contacts the Endpoint<br />
Mapper <strong>and</strong> the Distribution <strong>Manager</strong>. This needs two ports.<br />
Message Agent (13004-13006)<br />
The Message Agent is an RPC Client. It contacts the Endpoint Mapper<br />
<strong>and</strong> the Message Receiver. This needs two ports.<br />
In a flexible manager setup where the agent might report to different
management servers, the range should be increased so that two ports are
available for each server.
An extra port is needed for a socket connection to the Communication<br />
<strong>Manager</strong> when Bulk transfers are requested.<br />
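With three management servers, for example, the range might be widened to six RPC client ports plus the extra bulk-transfer port. The range below is illustrative, extending the default 13004-13006, and assumes opcmsga is the Message Agent process name:

```
OPC_RESTRICT_TO_PROCS opcmsga
OPC_COMM_PORT_RANGE 13004-13010
```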
Communication Agent (13007)<br />
The Communication Agent is a Socket Server <strong>and</strong> can be forced to one<br />
port.<br />
NT Virtual Terminal (13008-13009)<br />
The NT Virtual Terminal is a Socket Client connecting to the NT Virtual<br />
Terminal process running on the management server. It will open two<br />
socket connections. After closing, the connections on the NT side will stay<br />
in TIME_WAIT for several minutes <strong>and</strong> cannot be reused during this<br />
time. For this reason, the retry counter for the process needs to be<br />
increased to a much larger number than the default.<br />
This might cause a multi-minute delay when starting <strong>and</strong> stopping the<br />
NT Virtual Terminal repeatedly. To avoid this delay, the port range for<br />
the process has to be increased.<br />
Embedded Performance Component (14000-14003)<br />
Reporter <strong>and</strong> Performance <strong>Manager</strong> communicate with the embedded<br />
performance component via a protocol based on HTTP. To access data<br />
collected by the embedded performance component, ports for the HTTP<br />
server (embedded performance component) <strong>and</strong> the HTTP clients<br />
(Reporter <strong>and</strong>/or Performance <strong>Manager</strong>) need to be opened.<br />
Troubleshooting Problems<br />
When All Assigned Ports Are in Use<br />
When all the assigned ports are in use, the Message and Distribution
Agents hold any incoming and outgoing messages. The agent tries to
establish a link to a port at regular intervals. When the agent links to
a port, any messages held by the agents are released. This might
cause a short delay in communication requests.
❏ The Distribution Agent can wait up to one minute for the required<br />
port(s) to be free.<br />
❏ The Message Agent can wait up to one minute for the required<br />
port(s) to be free.<br />
The worst case scenario is that all messages are delayed by more than a
minute. The following example, illustrated in Figure C-1, shows how the
agents handle the non-availability of assigned ports.
Figure C-1 Message Buffering<br />
[Figure C-1 shows a timeline: Msg 0, Msg 1, and Msg 2 arrive while the
connection is ESTABLISHED (less than 20 seconds apart); the connection
then sits in TIME_WAIT for 1 minute (the TIME_WAIT default on HP-UX)
while Msg 3 is buffered, until a new connection is ESTABLISHED.]
There is a delay of more than 20 seconds before the next message<br />
(Msg 2). During this time, the connection was set to the TIME_WAIT state.<br />
Now a new connection to the server is required, but the old connection<br />
still blocks the port. The Message Agent goes into the buffering state <strong>and</strong><br />
retries to contact the server at regular intervals. Future incoming<br />
messages (Msg 3) are buffered until the connection is established again.<br />
After about one minute, the old connection in TIME_WAIT is cleaned up
and the port is freed. It can take a few more seconds until the Message
Agent next retries contacting the server. As soon as the next connection
is established, all buffered messages are sent.
For Open Agent Bulk transfers, all three ports of the assigned Message<br />
Agent port range will be used. If they are in TIME_WAIT status from a<br />
previous communication, it might take a few minutes before the request<br />
can be made.<br />
NOTE The agent’s error handling causes it to retry in the case of port
range issues. This might delay MSI handling or local automatic actions
for some minutes. These delays only happen when the assigned ports are
in use while the agent tries to communicate with the server.
Error Messages for Unavailable Ports<br />
Problem A<br />
When all assigned ports are in use, the following message is written to<br />
the System.txt file.<br />
The assigned port range for this process is currently in use.<br />
Increase range or expect delays or communication issues.<br />
(OpC20-173)<br />
Retry last action.....<br />
(OpC55-26)<br />
Solution A<br />
If the delay is acceptable, this error message can be ignored. Otherwise,
the port range for the process must be increased.
Problem B<br />
The following error messages show that the automatic h<strong>and</strong>ling of port<br />
range issues did not work. They might also indicate real communication<br />
issues, for example, networking problems or server processes not<br />
running:<br />
Checking server failed: Cannot bind socket (dce / rpc).<br />
(OpC20-106)<br />
The ITO Distribution Agent reported an unexpected error.<br />
(OpC30-1026)<br />
Communication failure to message receiver: Connection<br />
request rejected (dce / rpc). Buffering messages.<br />
(OpC30-3)<br />
Solution B<br />
Check that no network problems occur <strong>and</strong> ensure that all server<br />
processes are running.<br />
When the Server Does not H<strong>and</strong>le Port Ranges<br />
Automatically<br />
Problem A<br />
The management server does not handle port range related
communication issues automatically. These are reported and, unless
corrected, might cause:
❏ Wrong messages about agents being down<br />
❏ Lost action requests<br />
❏ Lost distribution requests<br />
Solution A<br />
You must therefore ensure that the port range for outgoing
communication is large enough. Error messages in the System.txt file
about the port range being too small must be taken seriously, and the
range should be increased.
Error Messages for Server Port H<strong>and</strong>ling<br />
Problem A<br />
In the event of a port range related communication issue, the server<br />
prints the following message in the System.txt file.<br />
The assigned port range for this process is currently in use.<br />
Increase range or expect delays or communication issues.<br />
(OpC20-173)<br />
Solution A<br />
In some situations there is a retry, but since the Request Sender is a
multi-threaded application, it cannot be guaranteed that the thread
requesting a communication port will get the next available port.
The System.txt file usually shows whether a retry occurred.
Problem B<br />
In the event of the retry failing, an error message similar to the following<br />
will be produced:<br />
Control agent on node karotte.hp.com isn’t accessible.<br />
(OpC40-405)<br />
Cannot perform configuration request for subagents (index<br />
ALL) on node karotte.hp.com.<br />
(OpC40-424)<br />
The ITO control agent isn’t running on the node.<br />
(OpC40-426)<br />
Cannot send request to subagents (index ITO) on node<br />
karotte.bbn.hp.com.<br />
(OpC40-443)<br />
Network communication problems occurred.<br />
(OpC40-427)<br />
Control Agent service not registered at Local Location<br />
Broker or DCE RPC Daemon on system stroppy.hp.com.<br />
(OpC20-160)<br />
Solution B<br />
When this message is displayed, you should increase the port range of<br />
the affected process.<br />
Problem C<br />
If an RPC server process finds the assigned port to be in use, the<br />
following message is produced:<br />
All ports in range '12001' are in use. Performing dynamic
allocation for ports out of specified range.
(OpC20-167)
Solution C<br />
In this situation, the RPC server is registered outside the assigned
port range. This causes communication through the firewall to fail
because the filter rules no longer match the actual environment. Find
out why the assigned port is in use, clear it, and restart the server
processes.
Known Issues in NAT Environments<br />
In a NAT environment, the following problems can be encountered.<br />
Disabling Remote Actions Also Disables Operator-Initiated<br />
Actions<br />
Problem<br />
When remote actions are disabled, OVO does not execute action requests
that originate from another system, while action requests originating
from the same system, for example, operator-initiated actions, are
still allowed. See the OVO Administrator’s Reference.
If this security feature is turned on in a NAT environment, this will<br />
disable all operator-initiated actions on address translated managed<br />
nodes because the agent cannot match the address in the action request<br />
to its own physical address.<br />
Solution<br />
There is no workaround available.<br />
Current Usage of the Port Range<br />
The netstat command can help you find which ports are currently in use
and which are still free:
netstat -n | grep '120.. '
This will return a list similar to the following:<br />
tcp 15.136.120.163.12001 15.136.120.163.15008 ESTABLISHED<br />
tcp 15.136.120.163.12001 192.168.1.3.13006 ESTABLISHED<br />
tcp 15.136.120.163.12001 15.136.126.41.2719 ESTABLISHED<br />
tcp 15.136.120.163.12001 15.136.123.25.1055 ESTABLISHED<br />
tcp 15.136.120.163.12008 15.136.120.54.1055 TIME_WAIT<br />
tcp 15.136.120.163.12009 15.136.122.10.1690 ESTABLISHED<br />
tcp 15.136.120.163.12011 15.136.121.98.135 ESTABLISHED<br />
tcp 15.136.120.163.12014 15.136.123.25.135 ESTABLISHED<br />
tcp 15.136.120.163.12017 15.136.123.25.1032 ESTABLISHED<br />
tcp 15.136.120.163.12019 15.136.122.10.135 ESTABLISHED<br />
tcp 15.136.120.163.12024 192.168.1.3.135 TIME_WAIT<br />
tcp 15.136.120.163.12025 15.136.126.41.135 ESTABLISHED<br />
tcp 15.136.120.163.12026 15.136.120.163.15001 TIME_WAIT<br />
tcp 15.136.120.163.12027 15.136.121.98.2707 ESTABLISHED<br />
tcp 15.136.120.163.12028 192.168.1.3.13001 TIME_WAIT<br />
tcp 15.136.120.163.12029 15.136.126.41.2176 ESTABLISHED<br />
tcp 15.136.120.163.12030 15.136.120.54.135 TIME_WAIT<br />
Four incoming message connections are connected to the Message
Receiver’s RPC server port (12001). Outgoing connections from the
Request Sender occupy 13 ports of the assigned range (12006-12040).
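The same filtering can be extended to count connection states; a sketch using a captured sample rather than live netstat output:

```shell
# Count how many local ports in the 120xx range are actively in use
# versus blocked in TIME_WAIT, from a captured netstat sample.
sample='tcp 15.136.120.163.12001 192.168.1.3.13006 ESTABLISHED
tcp 15.136.120.163.12008 15.136.120.54.1055 TIME_WAIT
tcp 15.136.120.163.12024 192.168.1.3.135 TIME_WAIT
tcp 15.136.120.163.12025 15.136.126.41.135 ESTABLISHED'

est=$(printf '%s\n' "$sample" | grep -c 'ESTABLISHED')
tw=$(printf '%s\n' "$sample" | grep -c 'TIME_WAIT')
echo "in use: $est, waiting: $tw"
```

Against a live system, replace the sample with `netstat -n | grep '120.. '` as shown above.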
Communication Issues with NT Nodes<br />
Microsoft RPCs are compatible with DCE RPCs, but they are not
implemented in the same way; in particular, the method of closing a
connection differs. This causes connections from the management server
to the NT node to stay established until a cleanup takes place on the
UNIX system. By default, this cleanup takes place every 5-15 minutes.
As a result, an RPC client process that communicates with several
nodes (the Request Sender and the Remote Agent Tool opcragt) can block
ports by leaving connections in an ESTABLISHED state for a long period
of time.
HP-UX
HP-UX DCE allows you to configure the ESTABLISHED state time using
the HPDCE_CLIENT_DISC_TIME environment variable.
Refer to “OPC_HPDCE_CLIENT_DISC_TIME” on page 183.
Numerics<br />
12000 (Display <strong>Manager</strong>), 158<br />
12001 (Message Receiver), 158<br />
12002 (Distribution <strong>Manager</strong>), 158<br />
12003 (Communication <strong>Manager</strong>), 159<br />
12004-12005 (Forward <strong>Manager</strong>), 159<br />
12006-12040 (Request Sender), 155, 159<br />
12041-12050 (Remote Agent Tool), 155, 159<br />
12051-12060 (TCP Socket Server), 159<br />
12061 (NT Virtual Terminal), 159<br />
13001 (Control Agent), 191<br />
13002-13003 (Distribution Agent), 192<br />
13004-13006 (Message Agent), 192<br />
13007 (Communication Agent), 192<br />
13008-13009 (NT Virtual Terminal), 192<br />
A<br />
actions, troubleshooting, 198<br />
additional documentation, 18<br />
address translation. See network address<br />
translation; port address translation<br />
Adobe Portable Document Format. See PDF<br />
documentation<br />
agent installation, 156<br />
agent messages, 162<br />
agent upgrade, 156<br />
agents<br />
communicating with management server,<br />
27–28, 29<br />
installing, 49–50<br />
AIX<br />
default.txt<br />
details, 96<br />
summary, 92<br />
nodeinfo, 96, 175, 186<br />
opcinfo, 103<br />
variables, 173, 182<br />
B<br />
buffering messages, 193<br />
C<br />
certificate deployment<br />
managed nodes, 156<br />
checking<br />
communication settings, 78<br />
endpoint map, 78<br />
Checkpoint <strong>Firewall</strong>-1 integration, 97–100<br />
CLIENT_BIND_ADDR()<br />
variable, 175, 186<br />
Index<br />
CLIENT_PORT() variable, 175,<br />
187<br />
clients, RPC, 153<br />
coda, 28, 30<br />
comm<strong>and</strong>, netstat, 199<br />
communication<br />
agent/server<br />
filter rules, 98–99<br />
model process, 27–28, 29<br />
checking settings, 78<br />
embedded performance component, 88<br />
heartbeat<br />
live packets, 31<br />
monitoring, 31–32<br />
RPC only, 31
model process<br />
descriptions, 27, 29<br />
flow, 27, 29<br />
MPE, 120<br />
NCS, 120<br />
objectives, 113<br />
ONC, 120<br />
OVO normal, 111<br />
OVO without endpoint mappers, 112<br />
port requirements<br />
manual deployment, 116<br />
remote deployment, 114<br />
port settings<br />
console, 54, 70, 171<br />
managed nodes, 54, 70, 171<br />
management server, 54, 69<br />
server/server, 99<br />
supported platforms, 120<br />
types<br />
DCE/TCP, 33<br />
DCE/UDP, 33, 82, 188<br />
Microsoft RPC, 34<br />
NCS, 34, 83, 189<br />
overview, 32<br />
Sun RPC, 34, 83, 189<br />
with RPCD, 118<br />
without RPCD, 119<br />
Communication Agent<br />
communication port settings, 70<br />
DCE managed nodes, 72<br />
description, 28, 30<br />
management server (13007), 192<br />
MC/ServiceGuard, 85<br />
Windows, 80<br />
Communication <strong>Manager</strong><br />
communication port settings, 54, 69<br />
DCE managed nodes, 72<br />
description, 30<br />
managed nodes (12003), 159<br />
MC/ServiceGuard, 85<br />
Windows, 80<br />
communicating with Windows managed
node outside firewall, 80
configuration, 123<br />
managed nodes<br />
RPC clients, 124<br />
RPC server, 127<br />
management servers<br />
RPC clients, 129<br />
RPC servers, 130<br />
setting variables for processes, 123<br />
<strong>Configuration</strong> Deployment Tool<br />
managed nodes, 156<br />
configuration file<br />
opcinfo, 132<br />
server port specification file, 133<br />
syntax, 135<br />
configurations<br />
Checkpoint <strong>Firewall</strong>-1 integration, 97–100<br />
communication types, 82, 188<br />
distributing in RPC call, 87
DNS, 48<br />
examples, 171, 181<br />
ICMP, 48<br />
Java GUI, 36<br />
MC/ServiceGuard, 85–86<br />
message forwarding, 38–40<br />
Motif GUI, 36<br />
SNMP, 48<br />
VP390/VP400, 41–42<br />
Windows managed nodes, 80<br />
configuring<br />
agent for network address translation, 103,<br />
105<br />
embedded performance component, 90–94<br />
firewall for DCE managed nodes, 55, 56,<br />
??–57, ??–58, 72<br />
managed nodes, 60, 75–77<br />
management server, 59, 73–74<br />
message forwarding in firewall<br />
environments, 40<br />
OVO for firewall environments, 35, 52–60,<br />
68–79<br />
Performance <strong>Manager</strong>, 90–94<br />
ports<br />
embedded performance component, 89–90<br />
Java GUI, 36<br />
Reporter, 90–94<br />
connections, TCP socket, 154<br />
console communication port settings, 54, 70,<br />
171<br />
content filtering<br />
combining with port restrictions, 100<br />
description, 97<br />
OVO, 98–99<br />
Control Agent<br />
communication port settings, 70<br />
DCE managed nodes, 72<br />
description, 30<br />
management server (13001), 191<br />
MC/ServiceGuard, 85<br />
Windows, 80<br />
conventions, document, 13<br />
conventions, naming, 26<br />
D<br />
DCE<br />
configuring message forwarding, 40<br />
managed nodes<br />
configuring firewall, 55, 56, ??–57, ??–58,<br />
72<br />
filter rules, 55, 57, 72<br />
TCP, 33<br />
UDP, 33, 82, 188<br />
DCE NODE<br />
description, 26<br />
firewall rule, 171, 181<br />
MC/ServiceGuard, 85<br />
runtime DCE managed nodes, 55, 72<br />
DCE-RPC service, 98<br />
default.txt locations<br />
details, 96<br />
summary, 92<br />
defining size of port range, 161<br />
deployment<br />
manual<br />
port requirements, 116<br />
Remote<br />
port requirements, 114<br />
Developer’s Toolkit documentation, 18<br />
diagnostics, 143<br />
Display <strong>Manager</strong><br />
communication port settings, 69
managed nodes (12000), 158<br />
distributing configuration in RPC call, 87
Distribution Agent<br />
communication port settings, 70<br />
description, 30<br />
management server (13002-13003), 192<br />
Distribution Manager<br />
communication port settings, 69<br />
description, 30<br />
managed nodes (12002), 158<br />
MC/ServiceGuard, 85<br />
Windows, 80<br />
DNS queries, 48<br />
document<br />
configuration examples, 171, 181<br />
description, 26<br />
naming conventions, 26<br />
prerequisites, 26<br />
document conventions, 13<br />
documentation, related<br />
additional, 18<br />
Developer’s Toolkit, 18<br />
ECS Designer, 18<br />
Java GUI, 23–24<br />
Motif GUI, 21–22<br />
online, 19, 21–24<br />
PDFs, 15<br />
print, 16<br />
documents, related, 168<br />
duplicate identical IP ranges, 44<br />
E<br />
ECS Designer documentation, 18<br />
Embedded Performance Component<br />
changing default port of LLP, 95<br />
communication port settings, 70<br />
configuring, 90–94<br />
ports, 89–90<br />
description, 28, 30, 88<br />
filter rules, 90<br />
multiple IP addresses, 61, 95<br />
OVO for Windows files, 96<br />
embedded performance component<br />
port usage<br />
management server (13008-13009), 192<br />
Endpoint Map<br />
checking, 78<br />
DCE managed nodes, 72<br />
MC/ServiceGuard, 85<br />
multiple management servers, 40<br />
Index<br />
Windows, 80<br />
error messages<br />
server port handling, 196–197<br />
unavailable ports, 194<br />
Event Correlation Service Designer. See ECS<br />
Designer documentation<br />
examples<br />
communications model process, 27, 29<br />
firewall<br />
configuration file, 171, 181<br />
rule, 171, 181<br />
F<br />
file locations<br />
default.txt, 92, 96<br />
nodeinfo, 96, 175, 186<br />
opcinfo, 103, 173, 182<br />
opcsvinfo, 173, 182<br />
filter rules<br />
agent installation<br />
UNIX, 50<br />
Windows, 49<br />
content filtering<br />
agent/server communication, 98–99<br />
server/server communication, 99<br />
description, 35, 52, 68<br />
embedded performance component, 90<br />
runtime DCE managed nodes, 55, 57, 72<br />
VP390, 42<br />
filtering content<br />
description, 97<br />
OVO, 98–99<br />
firewall<br />
communicating with Windows managed<br />
node outside, 80<br />
configuring<br />
example, 35<br />
for DCE managed nodes, 55, 56, ??–57,<br />
??–58, 72<br />
OVO for, 35, 52–60, 68–79<br />
description, 35, 52, 68<br />
MC/ServiceGuard, 85–86<br />
message forwarding, 38<br />
monitoring nodes inside and outside, 162<br />
network address translation, 43<br />
rule example, 171, 181<br />
tracing, 167<br />
VP390, 41<br />
white papers, 168<br />
Forward Manager<br />
communication port settings, 69<br />
managed nodes (12004-12005), 159<br />
forwarding messages<br />
communication concepts, 39<br />
configuring in firewall environments, 40<br />
description, 38<br />
FTP troubleshooting in network address<br />
translation, 45–46<br />
G<br />
GUI<br />
documentation<br />
Java, 23–24<br />
Motif, 21–22<br />
GUIs<br />
Java, 36<br />
Motif, 36<br />
H<br />
heartbeat monitoring<br />
live packets, 31<br />
normal, 31<br />
overview, 31<br />
RPC only, 31<br />
HP OpenView Event Correlation Service<br />
Designer. See ECS Designer<br />
documentation<br />
HP-OpCctla, 98–99<br />
HP-OpCctla-bulk, 98–99<br />
HP-OpCctla-cfgpush, 98–99<br />
HP-OpCdistm, 98–99<br />
HP-OpCmsgrd-coa, 98–99<br />
HP-OpCmsgrd-m2m, 98–99<br />
HP-OpCmsgrd-std, 98–99<br />
HP-UX<br />
communication issues with NT nodes, 200<br />
ndd(1M), 164<br />
network tuning<br />
HP-UX 10.20, 163<br />
HP-UX 11.x, 164–165<br />
opcsvinfo file location, 173, 182<br />
HTTP proxy<br />
configuring<br />
ports for embedded performance<br />
component, 89–90<br />
Reporter and Performance Manager,<br />
91–94<br />
I<br />
ICMP, 48<br />
installing agent, 49–50<br />
integration, Checkpoint Firewall-1, 97–100<br />
IP addresses<br />
multiple, 61, 95<br />
network address translation<br />
description, 43<br />
inside addresses, 63, 102<br />
inside and outside addresses, 64, 65–66,<br />
104–106<br />
IP masquerading, 65, 107<br />
outside addresses, 62, 101<br />
port address translation, 65, 107<br />
troubleshooting, 45–46, 198<br />
network address translation, duplicate identical IP ranges, 44<br />
ito_op, 37<br />
ito_op.bat, 37<br />
J<br />
Java GUI, 36<br />
JAVA GUI filter rule<br />
as source, 36<br />
description, 26<br />
L<br />
live-packet heartbeat monitoring, 31<br />
Local Location Broker, changing default port<br />
of, 95<br />
M<br />
managed node<br />
opcinfo settings for RPC clients, 124<br />
opcinfo settings for RPC servers, 127<br />
variables, 139<br />
managed nodes<br />
agent installation, 156<br />
agent upgrade, 156<br />
certificate deployment, 156<br />
communication issues with NT nodes, 200<br />
Communication Manager (12003), 159<br />
communication port settings, 54, 70, 171<br />
Configuration Deployment Tool, 156<br />
configuring<br />
RPC clients, 124<br />
RPC server, 127<br />
configuring OVO, 60, 75–77<br />
DCE
configuring firewall, 55, 56, ??–57, ??–58,<br />
72<br />
filter rules, 55, 57, 72<br />
Display <strong>Manager</strong> (12000), 158<br />
Distribution Manager (12002), 158<br />
Forward Manager (12004-12005), 159<br />
Message Receiver (12001), 158<br />
monitoring nodes inside and outside<br />
firewall, 162<br />
NT Virtual Terminal (12061), 159<br />
port usage, 171, 191–192<br />
Remote Agent Tool (12041-12050), 155, 159<br />
Request Sender (12006-12040), 155, 159<br />
TCP Socket Server (12051-12060), 159<br />
verifying communication settings, 78<br />
Windows, 80<br />
management server<br />
communicating with agents, 27–28, 29<br />
Communication Agent (13007), 192<br />
configuring<br />
message forwarding, 40<br />
OVO, 59, 73–74<br />
Control Agent (13001), 191<br />
defining communication ports, 54, 69<br />
Distribution Agent (13002-13003), 192<br />
embedded performance component ports,<br />
192<br />
forwarding messages, 39<br />
Message Agent (13004-13006), 192<br />
NT Virtual Terminal (13008-13009), 192<br />
opcinfo settings for RPC clients, 129<br />
opcinfo settings for RPC servers, 130<br />
port usage, 155–160<br />
troubleshooting when server does not<br />
handle port ranges automatically,<br />
196–197<br />
variables, 140<br />
verifying communication settings, 78<br />
management servers<br />
configuring<br />
RPC clients, 129<br />
RPC servers, 130<br />
manual deployment<br />
port requirements, 116<br />
map, checking endpoint, 78<br />
masquerading, IP, 65, 107<br />
MC/ServiceGuard in firewall environments,<br />
85–86<br />
message<br />
buffering, 193<br />
forwarding<br />
communication concepts, 39<br />
configuring in firewall environments, 40<br />
description, 38<br />
Message Agent<br />
communication port settings, 54, 70, 171<br />
description, 30<br />
management server (13004-13006), 192<br />
Message Receiver<br />
communication port settings, 69<br />
DCE managed nodes, 72<br />
description, 28, 30<br />
managed nodes (12001), 158<br />
MC/ServiceGuard, 85<br />
multiple management servers, 40<br />
Windows, 80<br />
messages. See agent messages; error<br />
messages; message<br />
MGD NODE, 26<br />
embedded performance component, 90<br />
SNMP, 48<br />
MGMT SRV<br />
agent installation<br />
UNIX, 50<br />
Windows, 49<br />
description, 26<br />
firewall rule, 171, 181<br />
Java GUI, 36<br />
NCS node ports, 83, 189<br />
runtime DCE managed nodes, 55, 72<br />
SNMP, 48<br />
Windows, 80<br />
Microsoft RPC, 34<br />
modification test<br />
server port specification file, 136<br />
monitoring, heartbeat<br />
live packets, 31<br />
normal, 31<br />
overview, 31<br />
RPC only, 31<br />
Motif GUI, 36<br />
Motif GUI documentation, 21–22<br />
multiple IP addresses, 61, 95<br />
N<br />
naming conventions, 26<br />
NAT. See network address translation<br />
NCS<br />
communication type, 83, 189<br />
description, 34<br />
NCS NODE<br />
description, 26<br />
NCS node ports, 83, 189<br />
ndd(1M)<br />
<strong>HP</strong>-UX, 164<br />
Solaris, 166<br />
netstat command, 199<br />
network address translation<br />
addresses<br />
inside, 63, 102<br />
inside and outside, 64, 65–66, 104–106<br />
outside, 62, 101<br />
configuring agent, 103, 105<br />
description, 43<br />
duplicate identical IP ranges, 44<br />
IP masquerading, 65, 107<br />
port address translation, 65, 107<br />
setting up responsible managers file,<br />
105–106<br />
troubleshooting, 45–46, 198<br />
Network Node Manager white paper, 168<br />
network tuning<br />
HP-UX<br />
10.20, 163<br />
11.x, 164–165<br />
Solaris, 166<br />
NNM. See Network Node Manager white<br />
paper<br />
nodeinfo, 96<br />
file location, 175, 186<br />
variables, 175–177, 186–190<br />
normal heartbeat monitoring, 31<br />
NT NODE<br />
agent installation, 49<br />
description, 26<br />
runtime managed nodes, 80<br />
NT nodes, communication issues with, 200<br />
NT Virtual Terminal<br />
communication port settings<br />
managed nodes, 70<br />
management server, 69<br />
port usage<br />
managed nodes (12061), 159<br />
management server (13008-13009), 192<br />
Windows, 80<br />
O<br />
online documentation<br />
description, 19<br />
OPC_AGENT_NAT variable, 183<br />
OPC_COMM_PORT_RANGE variable, 183<br />
OPC_DIST_MODE variable, 184<br />
OPC_HPDCE_CLIENT_DISC_TIME<br />
variable, 183<br />
OPC_MAX_PORT_RETRIES variable, 184<br />
OPC_RESTART_TO_PROCS variable, 185<br />
OPC_RPC_ONLY variable, 185<br />
opccma, 28, 30<br />
opccmm, 30<br />
opcctla, 30<br />
opcdista, 30<br />
opcdistm, 30<br />
opcinfo<br />
example, 132<br />
file location, 103, 173, 182<br />
managed node, 139<br />
management server, 140<br />
settings<br />
managed node RPC clients, 124<br />
managed node RPC servers, 127<br />
management server RPC clients, 129<br />
management server RPC servers, 130<br />
variables, 171–174, 182–185<br />
opcmsga, 30<br />
opcmsgrd, 28, 30<br />
opcsvinfo<br />
file location, 173, 182<br />
variables, 171–174, 182–185<br />
opctss, 30<br />
OpenView Event Correlation Service<br />
Designer. See ECS Designer<br />
documentation<br />
OpenView <strong>Operations</strong>. See OVO<br />
operator-initiated actions, troubleshooting,<br />
198<br />
OVO<br />
communication with RPCD, 118<br />
communication without endpoint mappers,<br />
112<br />
communication without RPCD, 119, 120<br />
components affected, 121<br />
components not affected, 122<br />
configuring<br />
for firewall environments, 35, 52–60,<br />
68–79<br />
managed nodes, 60, 75–77<br />
management server, 59, 73–74<br />
filtering content, 98–99<br />
firewall white papers, 168<br />
installing agent, 49–50
normal communication, 111<br />
objectives of communication without<br />
endpoint mappers, 113<br />
verifying communication settings<br />
managed nodes, 78<br />
management server, 78<br />
ovoareqsdr, 28, 30<br />
P<br />
PACKAGE IP<br />
description, 26<br />
MC/ServiceGuard, 85<br />
PAT. See port address translation<br />
PDF documentation, 15<br />
Performance<br />
white paper, 168<br />
PERFORMANCE MANAGER<br />
description, 26<br />
embedded performance component, 90<br />
Performance Manager<br />
communication port settings, 54, 70, 172<br />
configuring, 90–94<br />
description, 30<br />
PHYS IP NODE, 26<br />
port address translation, 65, 107<br />
port requirements<br />
manual deployment, 116<br />
remote deployment, 114<br />
Portable Document Format. See PDF<br />
documentation<br />
ports<br />
combining content filtering with port<br />
restrictions, 100<br />
configuring<br />
embedded performance component, 89–90<br />
Java GUI, 36<br />
embedded performance component, 192<br />
troubleshooting<br />
all assigned ports in use, 193–195<br />
current usage of port range, 199<br />
defining size of port range, 161<br />
error messages for server port handling,<br />
196–197<br />
error messages for unavailable ports, 194<br />
server does not handle port ranges<br />
automatically, 196–197<br />
usage<br />
managed nodes, 171, 191–192<br />
management server, 155–160<br />
overview, 153<br />
print documentation, 16<br />
process, 137<br />
setting variables, 123<br />
process, communications model<br />
descriptions, 27, 29<br />
example, 27, 29<br />
PROXY<br />
filter rule<br />
description, 26<br />
embedded performance component, 90<br />
variable, 176, 187<br />
proxy, HTTP<br />
configuring<br />
ports for embedded performance<br />
component, 89–90<br />
Reporter and Performance Manager,<br />
91–94<br />
Q<br />
queries<br />
DNS, 48<br />
SNMP, 48<br />
R<br />
range, port<br />
current usage, 199<br />
defining size, 161<br />
RPC-only heartbeat monitoring, 31<br />
related documentation<br />
additional, 18<br />
Developer’s Toolkit, 18<br />
ECS Designer, 18<br />
online, 19, 21–24<br />
PDFs, 15<br />
print, 16<br />
Remote Agent Tool<br />
communication port settings, 54, 69<br />
managed nodes (12041-12050), 155, 159<br />
remote deployment<br />
port requirements, 114<br />
REPORTER<br />
description, 26<br />
embedded performance component, 90<br />
Reporter<br />
communication port settings, 54, 70, 172<br />
configuring, 90–94<br />
description, 30<br />
white paper, 168<br />
Request Sender<br />
communication port settings, 54, 69<br />
description, 28, 30<br />
managed nodes (12006-12040), 155, 159<br />
RPC<br />
clients, 153<br />
daemon, 30<br />
distributing configuration, 87<br />
Microsoft, 34<br />
servers, 153<br />
Sun, 34<br />
RPC clients<br />
configuring on managed nodes, 124<br />
configuring on management servers, 129<br />
opcinfo settings on OVO managed nodes,<br />
124<br />
opcinfo settings on OVO management<br />
servers, 129<br />
RPC server<br />
configuring on managed nodes, 127<br />
RPC servers<br />
configuring on management servers, 130<br />
opcinfo settings on OVO managed nodes,<br />
127<br />
opcinfo settings on OVO management<br />
servers, 130<br />
RPCD<br />
communication, 118<br />
communication without, 119, 120<br />
rpcd, 30<br />
rules, filter<br />
runtime DCE managed nodes, 55, 57, 72<br />
S<br />
Secure Java GUI<br />
configuring ports for, 36<br />
filter rule, 36<br />
server port specification file, 133<br />
modification test, 136<br />
syntax, 135<br />
SERVER_BIND_ADDR()<br />
variable, 177, 188<br />
SERVER_PORT() variable, 190<br />
servers, RPC, 153<br />
services, Checkpoint Firewall-1, 98<br />
setting up responsible managers file,<br />
105–106<br />
settings, communication port<br />
console, 54, 70, 171<br />
managed nodes, 54, 70, 171<br />
management server, 54, 69<br />
SNMP queries, 48<br />
socket connections, TCP, 154<br />
Solaris<br />
ndd(1M), 166<br />
network tuning, 166<br />
opcsvinfo file location, 173, 182<br />
Sun RPC<br />
communication type, 83, 189<br />
description, 34<br />
T<br />
TCP<br />
configuring message forwarding, 40<br />
description, 33<br />
socket connections, 154<br />
TCP Socket Server<br />
communication port settings, 54, 69<br />
description, 30<br />
managed nodes (12051-12060), 159<br />
testing, 149<br />
tracing, 145<br />
translation. See network address translation;<br />
port address translation<br />
troubleshooting, 143<br />
agent messages, 162<br />
all assigned ports in use, 193–195<br />
communication issues with NT nodes, 200<br />
current usage of port range, 199<br />
defining size of port range, 161<br />
diagnostics, 143<br />
disabling remote actions disables<br />
operator-initiated actions, 198<br />
error messages<br />
server port handling, 196–197<br />
unavailable ports, 194<br />
FTP does not work, 45–46<br />
monitoring nodes inside and outside<br />
firewall, 162<br />
network tuning<br />
HP-UX 10.20, 163<br />
HP-UX 11.x, 164–165<br />
Solaris, 166<br />
server does not handle port ranges<br />
automatically, 196–197<br />
testing, 149<br />
tracing, 145<br />
tracing firewall, 167<br />
types, communication<br />
DCE/TCP, 33
DCE/UDP, 33<br />
Microsoft RPC, 34<br />
NCS, 34<br />
overview, 32<br />
Sun RPC, 34<br />
typographical conventions. See document<br />
conventions<br />
U<br />
UDP<br />
DCE, 33<br />
NCS<br />
description, 34<br />
node ports, 83, 189<br />
UNIX<br />
agent installation filter rules, 50<br />
default.txt, 92, 96<br />
ito_op, 37<br />
nodeinfo, 96, 175, 186<br />
opcinfo, 103<br />
variables, 173, 182<br />
usage, port<br />
current usage of port range, 199<br />
managed nodes, 171, 191–192<br />
management server, 155–160<br />
overview, 153<br />
UX NODE, 26, 50<br />
V<br />
variable<br />
setting by process, 123<br />
variables, 139<br />
managed node, 139<br />
management server, 140<br />
nodeinfo, 175–177, 186–190<br />
opc[sv]info, 171–174, 182–185<br />
verifying communication settings<br />
managed nodes, 78<br />
management server, 78<br />
VP390/VP400, 41–42<br />
W<br />
web pages, related, 168<br />
white papers, additional, 168<br />
Windows<br />
agent installation filter rules, 49<br />
communication issues with NT nodes, 200<br />
default.txt<br />
details, 96<br />
summary, 92<br />
ito_op.bat, 37<br />
managed nodes, 80<br />
nodeinfo, 96, 175, 186<br />
opcinfo<br />
location, 103<br />
variables, 173, 182<br />