
International Technical Support Organization

Database Systems Management:
IBM SystemView Information Warehouse
DataHub Implementation and Connectivity

June 1993

GG24-4031-00




Take Note!

Before using this information and the products it supports, be sure to read the general information under “Special Notices” on page xv.

First Edition (June 1993)

This edition applies to Version 1 Release 1 of the DataHub family of products for use with the MVS, VM, AS/400, and OS/2 operating systems:

• IBM SystemView Information Warehouse DataHub/2, Program Number 5667-134
• IBM SystemView Information Warehouse DataHub Support/MVS, Program Number 5695-166
• IBM SystemView Information Warehouse DataHub Support/VM Feature of SQL/DS Version 3 Release 3, Program Number 5688-103
• IBM SystemView Information Warehouse DataHub Support/400, Program Number 5738-DM1
• IBM SystemView Information Warehouse DataHub Support/2, Program Number 5648-026.

Order publications through your IBM representative or the IBM branch office serving your locality. Publications are not stocked at the address given below.

An ITSC Technical Bulletin Evaluation Form for readers' feedback appears facing Chapter 1. If the form has been removed, comments may be addressed to:

IBM Corporation, International Technical Support Center
Dept. 471, Building 098
5600 Cottle Road
San Jose, California 95193-0001

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

© Copyright International Business Machines Corporation 1993. All rights reserved.

Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.


Abstract

This document describes the implementation of the IBM SystemView Information Warehouse DataHub family of products to operate on all SAA platforms (MVS, VM, AS/400, OS/2). It provides guidelines on how to plan for DataHub, establish connectivity links in the DRDA and APPC environments, and configure DataHub. Problem determination and security issues are considered.

This document is intended for customers and technical professionals who want to install and configure DataHub. A knowledge of distributed relational database systems and Distributed Relational Database Architecture (DRDA) connectivity is assumed.

DS (285 pages)





Contents

Abstract . . . iii

Figures . . . ix

Tables . . . xiii

Special Notices . . . xv

Preface . . . xvii
Objectives . . . xvii
Audience . . . xvii
Document Structure . . . xvii
How to Read This Document . . . xviii

Related Publications . . . xxi
Related Product Documentation . . . xxi
International Technical Support Center Publications . . . xxv
Acknowledgments . . . xxvii

Chapter 1. Introduction to DataHub . . . 1
1.1 DataHub Architecture Overview . . . 3
1.1.1 Platform Feature . . . 4
1.1.2 Tools Feature . . . 4
1.2 DataHub Data Flows: Overview . . . 6
1.2.1 Run . . . 7
1.2.2 Utilities . . . 8
1.2.3 Manage Authorizations . . . 10
1.2.4 Display Status . . . 11
1.2.5 Copy Data . . . 12
1.3 Data Flow Summary . . . 17

Chapter 2. Planning for DataHub . . . 19
2.1 Customer Business Requirements . . . 19
2.2 Sample Scenarios . . . 20
2.2.1 Scenario 1: Local Management of Local Data . . . 20
2.2.2 Scenario 2: Central Management of Distributed Data . . . 22
2.2.3 Scenario 3: Distributed Management of Distributed Data . . . 24
2.3 Factors Affecting Implementation . . . 26
2.4 Planning for Implementation: A Checklist . . . 26
2.4.1 Gather Information . . . 26
2.4.2 Design the Solution . . . 27
2.4.3 Determine Skills Required . . . 28
2.4.4 Naming Conventions . . . 28
2.5 Summary and Recommendations . . . 30

Chapter 3. DRDA Connectivity . . . 31
3.1 SNA Environment . . . 31
3.2 Key Connectivity Parameters . . . 33
3.2.1 SNA Parameters . . . 33
3.2.2 DRDA Parameters . . . 35
3.2.3 RDS Parameters . . . 36
3.2.4 Cross-Reference . . . 37
3.3 The ITSC Network . . . 38
3.4 MVS Definitions . . . 40
3.4.1 SNA Definitions . . . 40
3.4.2 DRDA Definitions . . . 47
3.4.3 Testing DRDA Connections . . . 51
3.5 VM Definitions . . . 51
3.5.1 SNA Definitions . . . 51
3.5.2 DRDA Definitions . . . 52
3.5.3 Binding Applications . . . 59
3.6 AS/400 Definitions . . . 60
3.6.1 SNA Definitions . . . 61
3.6.2 DRDA Definitions . . . 67
3.6.3 Testing DRDA Connections . . . 68
3.6.4 Binding Applications . . . 69
3.7 OS/2 Definitions . . . 69
3.7.1 OS/2 Host Components . . . 69
3.7.2 Relationship between OS/2 Components . . . 70
3.7.3 General Customization Plan . . . 73
3.7.4 SNA Definitions . . . 73
3.7.5 DRDA Definitions . . . 86
3.7.6 Binding Applications . . . 97
3.7.7 Testing DRDA and RDS Connections . . . 98
3.7.8 Managing OS/2 Database Manager Directories . . . 98
3.8 Adding a New DRDA Host . . . 102
3.8.1 DRDA Client . . . 102
3.8.2 DRDA Client/Server . . . 103
3.9 Recommendations . . . 104

Chapter 4. DataHub/2 Workstation . . . 105
4.1 Component Configuration . . . 106
4.2 Scenario 1: Local Management of Local Data . . . 110
4.2.1 Required Products . . . 110
4.2.2 Installation Procedure and Configuration . . . 111
4.2.3 Process Flow . . . 111
4.3 Scenario 2: Central Management of Distributed Data . . . 114
4.3.1 Required Products . . . 114
4.3.2 Installation Procedure and Configuration . . . 115
4.3.3 Process Flow . . . 115
4.4 Scenario 3: Distributed Management of Distributed Data . . . 117
4.4.1 Required Products . . . 118
4.4.2 DataHub/2 Execution: Data Requirements . . . 118
4.4.3 Installation Environment Preparation . . . 119
4.4.4 DataHub/2 Workstation Installation as a Requester . . . 121
4.4.5 LAN Connectivity Parameters . . . 127
4.4.6 OS/2 Database Manager Connectivity Parameters . . . 143
4.4.7 DataHub/2 Database Connectivity Parameters . . . 144
4.5 LAN Design Considerations . . . 144
4.5.1 LAN Server Compatibility . . . 144
4.5.2 Placement of LAN Server and DataHub/2 Database . . . 145
4.6 Performance . . . 145
4.6.1 Placement . . . 145
4.6.2 Workload . . . 146
4.7 Codepage Support . . . 148
4.8 Recommendations . . . 148
4.8.1 Create the Administrator's Folder . . . 148

Chapter 5. DataHub Tools Conversation Connectivity . . . 151
5.1 Key Connectivity Parameters . . . 152
5.2 The ITSC Network . . . 154
5.3 Configuring the DataHub/2 Workstation . . . 155
5.3.1 Component Relationships . . . 155
5.3.2 DataHub/2 Database Definitions . . . 156
5.3.3 OS/2 Communications Manager Definitions . . . 161
5.3.4 OS/2 Database Manager Definitions . . . 166
5.4 Configuring MVS for DataHub Support/MVS . . . 167
5.4.1 Preparing for DataHub Support/MVS Installation . . . 167
5.4.2 Installing DataHub Support/MVS Platform and Tools Feature . . . 168
5.4.3 Configuring DataHub Support/MVS Environment . . . 168
5.4.4 Starting DataHub Support/MVS . . . 171
5.4.5 DataHub Support/MVS Connectivity Requirements . . . 172
5.5 Configuring VM for DataHub Support/VM . . . 175
5.5.1 VM/ESA Service Pool Support Facility . . . 175
5.5.2 DataHub Support/VM Operating System Overview . . . 177
5.5.3 Configuring DataHub Support/VM Environment . . . 178
5.5.4 Operating DataHub Support/VM . . . 187
5.5.5 Performance Consideration . . . 190
5.6 Configuring AS/400 for DataHub Support/400 . . . 191
5.6.1 AS/400 Network Definition . . . 191
5.6.2 Configuration Considerations . . . 192
5.7 Configuring OS/2 for DataHub Support/2 . . . 193
5.7.1 OS/2 Host Components . . . 193
5.7.2 DataHub Support/2 Customization . . . 194
5.7.3 OS/2 Communications Manager Definitions . . . 196
5.7.4 OS/2 Database Manager Definitions . . . 201
5.8 Recommendations . . . 202

Chapter 6. Problem Determination . . . 203
6.1 Implementation Strategy . . . 203
6.2 DataHub/2 Workstation . . . 204
6.2.1 DataHub/2 Log File . . . 205
6.2.2 EMQDUMP Command . . . 207
6.2.3 OS/2 Communications Manager Trace Utility . . . 208
6.2.4 DataHub/2 Trace Utility . . . 209
6.2.5 Distributed Database Connection Services/2 Trace Utility . . . 210
6.3 MVS Host Problem Determination . . . 210
6.3.1 DataHub Support/MVS Trace Files . . . 210
6.3.2 An Approach to Problem Diagnosis . . . 212
6.3.3 Communication Problem Types . . . 213
6.3.4 Being Prepared . . . 223
6.3.5 Analyzing DB2 DDF Error Messages . . . 225
6.4 VM Host Problem Determination . . . 226
6.4.1 Monitoring DataHub Support/VM Gateways and Conversation . . . 226
6.4.2 Handling System Messages . . . 226
6.4.3 Diagnosing Problems . . . 228
6.4.4 Problems Involving the CMR . . . 229
6.4.5 Problems Involving the Task Handler and the Tools Feature . . . 230
6.5 AS/400 Host Problem Determination . . . 232
6.5.1 The AS/400 Tools . . . 232
6.5.2 Network Problems . . . 233
6.5.3 Return Codes and Error Messages . . . 236
6.5.4 SQL Return Codes . . . 236
6.5.5 Request Codes . . . 237
6.5.6 Security Errors . . . 237
6.5.7 Debugging Tools Conversation Data Flow Errors at AS/400 . . . 238
6.5.8 EMQ9088E Message . . . 238
6.5.9 Traces . . . 239
6.6 OS/2 Host Problem Determination . . . 239

Chapter 7. Security Considerations . . . 241
7.1 Security on the DataHub/2 Workstation . . . 241
7.1.1 User Profile Management . . . 241
7.1.2 DataHub/2 User Profiles . . . 246
7.1.3 DataHub/2 Database Privileges . . . 248
7.2 MVS Security . . . 248
7.2.1 DB2 Attachment to the Network Check . . . 248
7.2.2 VTAM and RACF Partner LU Validation . . . 248
7.2.3 DB2 Request Checking . . . 249
7.3 VM Security . . . 251
7.3.1 Controlling Access to the SQL/DS Application Server System . . . 251
7.3.2 Controlling Access to the SQL/DS Application Requester System . . . 253
7.4 AS/400 Security . . . 254
7.4.1 User Profile and Password Validation . . . 254
7.4.2 QEMQUSER Authorization . . . 254
7.4.3 Security Administration . . . 255
7.5 OS/2 Managed Host Security . . . 255
7.5.1 RDS Security . . . 255
7.5.2 Tools Conversation Security . . . 255
7.6 Managing Password Changes . . . 256
7.7 Recommendations . . . 257

Appendix A. OS/2 Communications Manager Network Definition File . . . 259

Glossary . . . 265

List of Abbreviations . . . 283

Index . . . 285


Figures

1. DataHub and SystemView Strategy . . . 1
2. DataHub in Information Warehouse Framework . . . 2
3. DataHub Architecture . . . 3
4. Selecting the Copy Action . . . 5
5. Data Flow Overview . . . 6
6. Copy Data Flows for MVS, VM, and OS/400 Platforms . . . 13
7. Copy Data Involving OS/2 Hosts . . . 14
8. Copy Data from OS/2 Source to Unlike Target . . . 15
9. Copy Data to an OS/2 Target from Unlike Source . . . 16
10. Copy Data from OS/2 Host to OS/2 Host . . . 16
11. Scenario 1: Local Management of Local Data . . . 20
12. Scenario 2: Central Management of Distributed Data . . . 22
13. Scenario 3: Distributed Management of Distributed Data . . . 24
14. ITSC Network Configuration . . . 38
15. Cross-Domain Resource Manager Definitions in MVS/VTAM . . . 41
16. Cross-Domain Resource Definitions in MVS/VTAM . . . 41
17. LAN Address for the Network Control Program . . . 42
18. MAXDATA Definition for the Network Control Program . . . 43
19. MAXBFRU and UNITSZ Definition for the Network Control Program . . . 43
20. VTAM Startup Command . . . 43
21. SSCPNAME=SCG20 VTAM Startup Parameter List: ATCSTR00 . . . 44
22. LU Definition for DB2 in the MVS VTAM Environment . . . 46
23. IBMRDB LOGMODE . . . 46
24. DB2 to VTAM Connection . . . 48
25. SYSIBM.SYSLOCATIONS CDB Table . . . 49
26. SYSIBM.SYSLUNAMES CDB Table . . . 50
27. SYSIBM.SYSUSERNAMES CDB Table . . . 50
28. SYSIBM.SYSMODESELECT CDB Table . . . 50
29. SYSIBM.SYSLUMODES CDB Table . . . 50
30. AVS LU Definition for SQL/DS Environment . . . 52
31. Main DRDA Components in an SQL/DS Environment . . . 53
32. CMS RESID NAMES File . . . 54
33. SQLVMA SQLDBN File . . . 54
34. Database Machine Definition . . . 55
35. AVS Machine Definition . . . 56
36. AGWPROF GCS File . . . 57
37. UCOMDIR NAMES File . . . 58
38. AS/400 and VTAM: Matching Parameters . . . 61
39. VTAM: AS/400 Definition . . . 63
40. ITSC AS/400 Network Attributes Definition . . . 64
41. AS/400: Token Ring Line Description Definition . . . 64
42. AS/400: Controller Description Definition . . . 65
43. AS/400: Device Description Definition . . . 65
44. ITSC AS/400 Configuration List Definition . . . 67
45. AS/400: Local and Remote Relational Database Directory . . . 68
46. OS/2 Component Definition Relationships . . . 71
47. OS/2 Host Definition . . . 74
48. OS/2 Communications Manager Relationships with VTAM Definitions . . . 75
49. LAPS: Configure Workstation Window . . . 76
50. LAPS: Parameters for IBM Token-Ring Network Adapters Window . . . 77
51. CM/2: Communications Manager Configuration Definition Window . . . 78
52. CM/2: Communications Manager Profile List Sheet Window . . . 79
53. CM/2: Token-Ring or Other LAN Types DLC Adapter Parameters Window . . . 79
54. CM/2: Local Node Characteristics Window . . . 80
55. CM/2: Connections List Window (Host Links) . . . 82
56. CM/2: Adapter List Window . . . 82
57. CM/2: Change Connection to a Host Window . . . 83
58. CM/2: Change Partner LUs Window . . . 84
59. CM/2: SNA Features List Window (Modes) . . . 85
60. CM/2: Change a Mode Definition Window . . . 86
61. OS/2 DBM: REXX Procedure to Define Directory Entries . . . 87
62. OS/2 DBM: Directory Tool Window . . . 88
63. OS/2 DBM: Catalog Workstation Window . . . 89
64. OS/2 DBM: Workstation Directory List . . . 90
65. OS/2 DBM: Catalog Database Window . . . 91
66. (Part 1 of 2) OS/2 DBM: System Database Directory . . . 92
67. (Part 2 of 2) OS/2 DBM: System Database Directory . . . 93
68. OS/2 DBM: Catalog Database Window . . . 95
69. OS/2 DBM: DCS Directory List . . . 96
70. OS/2 DBM: Listing of DataHub/2 Database Manager Server Configuration . . . 97
71. OS/2 DBM: REXX Procedure to Print OS/2 Database Manager Definitions . . . 99
72. OS/2 DBM: REXX Procedure to Back Up the OS/2 Database Manager Directories . . . 100
73. OS/2 DBM: REXX Procedure to Recover the OS/2 Database Manager Directories . . . 101
74. Major Connectivity Layers . . . 107
75. Scenario 1: Local Management of Local Data . . . 110
76. Scenario 1: Database Catalogs and RDB Name Map File . . . 112
77. Scenario 2: Central Management of Distributed Data . . . 114
78. Database Connection Services Directory . . . 115
79. Scenario 3: Distributed Management of Distributed Data . . . 117
80. OS/2 Database Manager Directories . . . 121
81. DH/2 Install: Feature Select . . . 122
82. DH/2 Install: Installation Type . . . 122
83. DH/2 Install: Target Directories . . . 123
84. DH/2 Install: Update Configuration . . . 124
85. DH/2 Install: DataHub/2 Database Name . . . 125
86. DH/2 Install: Delimiter Characters for Commands . . . 125
87. DH/2 Install: LAN Workstation Identification . . . 126
88. DH/2 Install: Add Program to Desktop . . . 126
89. DH/2 Install: Installation Confirmation . . . 127
90. LAN Configurable Resources . . . 129
91. LAPS: Select an Option . . . 133
92. LAPS: Migrate or Configure . . . 133
93. LAPS: Configure Workstation . . . 134
94. LAPS: Parameters for IBM IEEE 802.2, DataHub/2 Workstation . . . 135
95. LAPS: Parameters for 802.2, LAN Server . . . 136
96. LAPS: Configure Workstation, DataHub/2 Workstation NETBIOS . . . 137
97. LAPS: Parameters for NETBIOS, DataHub/2 Workstation . . . 138
98. LAPS: Parameters for NETBIOS, DataHub/2 Server . . . 139
99. LAPS: Exit . . . 139
100. LAPS: PROTOCOL.INI, DataHub/2 Workstation . . . 140
101. LAPS: PROTOCOL.INI, LAN Server . . . 141
102. LAPS: IBMLAN.INI, DataHub/2 Workstation . . . 142
103. LAPS: IBMLAN.INI, LAN Server . . . 142
104. OS/2 Database Manager Connectivity Parameters: Client . . . 143
105. OS/2 Database Manager Connectivity Parameters: Server . . . 143
106. DataHub/2 Database Active Applications: Server . . . 144
107. CMD File for Codepage Change . . . 148
108. A DataHub/2 OS/2 Desktop: Administrator's Folder . . . 148
109. DataHub Data Flows Used by DataHub Tool Functions . . . 151
110. DataHub/2: OS/2 Component Relationship . . . 156
111. DH/2: Selecting Add on Configure Pull-down Menu . . . 157
112. DH/2: Add Host Window . . . 158
113. DH/2: Display Related Objects Window . . . 159
114. DH/2: Adding an RDB . . . 159
115. DH/2: Add RDB Window . . . 160
116. ITSC Hosts and Their RDB Names . . . 161
117. Partner LU Definition for an SAA RDB System . . . 163
118. SNA Features List Window (CPI Communications Side Information) . . . 164
119. Change CPI Communications Side Information . . . 165
120. OS/2: System Database Directory . . . 167
121. EMQMCI01 CLIST DataHub Support/MVS Data Set Concatenation . . . 169
122. Installation Data Sets . . . 170
123. DHS/MVS: DATAHUB.V110.SEMQMJCL Members . . . 170
124. DHS/MVS: Abend 806, Module Not Found . . . 170
125. DHS/MVS Data Set APF Authorization . . . 171
126. DHS/MVS Startup Job . . . 172
127. DataHub Support/MVS APPL and LOGMODE Entries . . . 173
128. DataHub Support/MVS Tool Dispatcher VTAM Application Name Definition for EMQMACB1 APPL . . . 174
129. DataHub Support/MVS SNASVCMG and EMQMLOGM Logon Modes . . . 174
130. DataHub Support/MVS Startup Error . . . 175
131. VM Service Pool Environment . . . 176
132. DHS/VM Operating System Overview . . . 177
133. DHS/VM: Summary of Configuration Steps . . . 179
134. DHS/VM Environment Configuration Overview . . . 180
135. DHS/VM Files and EXECs Overview . . . 181
136. VTAM Sample AVS LU Definition for DataHub Support/VM . . . 183
137. DHS/VM: AGWPROF GCS Example . . . 184
138. DHS/VM: COMDIR NAMES File . . . 185
139. DHS/VM: Service Pool Configuration File . . . 185
140. DHS/VM: EMQVSYS CONFIG File . . . 186
141. DHS/VM: PROFILE EXEC of SPM00000 and SPM00001 Machines . . . 186
142. DHS/VM: EMQVTOOL CONFIG File . . . 187
143. DHS/VM Communication Flow for ACTIVATE Command . . . 187
144. DHS/VM Communication Flow for ALLOCATE Command . . . 188
145. DHS/VM Communication Flow for Completing Conversations . . . 190
146. DHS/2: OS/2 Component Relationship Diagram . . . 194
147. DHS/2: Configuration File (EMQ2SYS.CFG) . . . 195
148. Map File (DHS2.MAP) . . . 195
149. SNA Features List Window (Transaction Program Definitions) . . . 197
150. Create a Transaction Program Definition Window . . . 198
151. Create Additional TP Parameters Window . . . 198
152. SNA Features List Window (Conversation Security) . . . 199
153. CM/2: Create Conversation Security Window . . . 200
154. Create Conversation Security Window (After Definition) . . . 201
155. DataHub Support/MVS Trace Parameter Specification . . . 211
156. DataHub Support/MVS DASD Trace Data Set Definition . . . 211
157. DataHub Support/MVS DATAHUB.V110.A006.TDTRACE Data Set . . . 212
158. DataHub Support/MVS DATAHUB.V110.STDB2C.A0060002.TRACE Data Set . . . 212
159. DDF: SQLCODE -30080, Communication Error . . . 225
160. DB2: DSNL501I Message . . . 225
161. DHS/VM: Sample DataHub Support/VM AVS Console . . . 227
162. Sample DataHub Support/VM CMR Alert File . . . 230
163. Example of DataHub Support/VM Trace File . . . 231
164. Sample DataHub Support/VM Alert File . . . 232
165. AS/400: Verifying Communications . . . 233
166. AS/400: DSPMSG QSYSOPR Command Output . . . 234
167. AS/400: Parameter ASTLVL in User Profile . . . 235
168. AS/400: Message Logging in Job Description . . . 235
169. AS/400: Operator Message Queue Display Screen . . . 236
170. AS/400 Detailed SQLCODE . . . 237
171. AS/400 Security Error Message . . . 237
172. Problem Determination Entries in DataHub Support/2 Configuration File . . . 240
173. User Profile Management: Security Flow Diagram . . . 244
174. DH/2: User Profile Entry Selection Diagram . . . 247
175. DB2 Security Mechanism . . . 250
176. VM Security Inbound Processing . . . 251


Tables

1. Data Flows Used by Tool Functions . . . 17
2. Data Flows Used by Copy Data . . . 17
3. SNA Key Connectivity Parameters Cross-Reference . . . 37
4. DRDA Key Connectivity Parameters Cross-Reference . . . 37
5. The ITSC Network: SNA Key Connectivity Parameter Values, non-OS/2 Hosts . . . 39
6. The ITSC Network: DRDA Key Connectivity Parameter Values, non-OS/2 Hosts . . . 39
7. The ITSC Network: SNA Key Connectivity Parameter Values, OS/2 Hosts . . . 39
8. The ITSC Network: RDS Key Connectivity Parameter Values, OS/2 Hosts . . . 39
9. Communications Definitions . . . 62
10. OS/2 Local Node Definitions for the ITSC Network . . . 80
11. OS/2 Partner LU Definitions for the ITSC Network . . . 85
12. Sample Verification Procedures . . . 98
13. LAN Resources . . . 130
14. NETBIOS Session Information for DB2/2 . . . 132
15. DB2/2 NETBIOS Resource Requirements . . . 132
16. LAN Server and OS/2 Version Compatibility . . . 144
17. DataHub/2 Administrator's Folder Definitions . . . 149
18. DataHub/2: Key Connectivity Parameters . . . 152
19. ITSC Network: DataHub Key Connectivity Parameter Values . . . 154
20. ITSC Network: DataHub Key Connectivity Parameter Values . . . 154
21. DataHub/2 Database Definitions Used in the ITSC Network . . . 161


Special Notices

This publication is intended to help customers and technical professionals plan for and implement DataHub with APPC and DRDA connectivity. The information in this publication is not intended as the specification of any programming interfaces that are provided by the DataHub family of products. See the PUBLICATIONS section of the IBM Programming Announcement for the DataHub family of products for more information about what publications are considered to be product documentation.

References in this publication to IBM products, programs, or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent program that does not infringe any of IBM's intellectual property rights may be used instead of the IBM product, program, or service.

Information in this book was developed in conjunction with use of the equipment specified, and is limited in application to those specific hardware and software products and levels.

IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to the IBM Director of Commercial Relations, IBM Corporation, Purchase, NY 10577.

The information contained in this document has not been submitted to any formal IBM test and is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

Any performance data contained in this document was determined in a controlled environment; therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.

The following terms, which are denoted by an asterisk (*) in this publication, are trademarks of the International Business Machines Corporation in the United States and/or other countries:

ACF/VTAM
AIX
AS/400
BookManager
CM/2
CUA
DATABASE 2
DataHub
DB2
DB2/2
Distributed Database Connection Services/2
Distributed Relational Database Architecture
DRDA
IBM
Information Warehouse
MVS/ESA
NetView
OS/2
OS/400
PS/2
QMF
RACF
SAA
SNA
SQL/DS
SQL/400
Systems Application Architecture
Systems Network Architecture
SystemView
VM/ESA
VTAM


Preface

Objectives

DataHub is a database systems management family of products for distributed relational databases located on MVS, VM, AS/400, and OS/2 systems. DataHub operates using APPC and DRDA protocols to connect to and manage the relational databases interconnected on the network.

This book explains how to plan for DataHub and implement the DRDA and APPC connectivity between the relational databases and the DataHub workstation. The book clarifies the connectivity links by highlighting specific configuration definitions that must be implemented in the network as they relate to DataHub and the databases it manages. The document also examines capacity considerations, problem determination, and security issues related to DataHub.
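To give a concrete flavor of the connectivity this book implements, the fragment below shows what a DRDA connection from the DataHub/2 workstation looks like once that connectivity is in place, using the OS/2 Database Manager REXX interface. It is a minimal sketch, not taken from this book: the database alias MVSDB is a hypothetical name for a remote relational database that has already been cataloged as described in Chapter 3, and the exact command strings depend on your Database Manager level.

   /* Minimal sketch: verify a DRDA connection from an OS/2 requester. */
   /* The alias MVSDB is hypothetical and assumes the directory        */
   /* entries described in Chapter 3 are already in place.             */
   call RxFuncAdd 'SQLEXEC', 'SQLAR', 'SQLEXEC'  /* register DBM API   */

   call SQLEXEC 'CONNECT TO mvsdb'               /* flows over DRDA    */
   if SQLCA.SQLCODE = 0 then
     say 'Connected to MVSDB through DRDA'
   else
     say 'Connect failed, SQLCODE =' SQLCA.SQLCODE

   call SQLEXEC 'CONNECT RESET'                  /* end the connection */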

Audience

The book is written for technical professionals, database administrators, system programmers, and users who are involved in planning, configuring, and installing DataHub. The implementation of DataHub is a team effort. Each member of the team should have special skills in the area or platform (DB2, SQL/DS, AS/400, OS/2, DRDA, LAN, and VTAM) involved in the installation of DataHub. It is assumed that readers of this book have basic prerequisite knowledge in the areas in which they are interested. This book does not explain the basic functions of each SAA platform and networking environment involved in the DataHub implementation.

Document Structure

The document is organized as follows:

• Chapter 1, “Introduction to DataHub”

This chapter introduces DataHub and positions it in the SystemView strategy and Information Warehouse framework. The chapter also presents an overview of the DataHub architecture, discusses the DataHub functions, and explains the data flows used by the DataHub functions between the different DataHub components.

• Chapter 2, “Planning for DataHub”

This chapter discusses how to plan for the implementation of DataHub in a customer environment. The chapter examines the factors affecting implementation and the customer business requirements in three sample scenarios. It also discusses the in-house skills a customer will need to develop, suggests naming conventions, and provides a checklist of implementation planning tasks.

• Chapter 3, “DRDA Connectivity”

This chapter addresses DRDA connectivity—a prerequisite for DataHub—with working examples. It describes the networking environment and the key parameters that will establish the connectivity between the network components, including the physical and logical network connections and the DRDA application connections. The chapter also shows how to maintain and manage network changes. It concludes with some recommendations for a successful DRDA connectivity implementation.

• Chapter 4, “DataHub/2 Workstation”

This chapter discusses the components of DataHub/2 and the DataHub/2 prerequisites that have to be installed on the OS/2 platform, including LAN requesters and server, OS/2 Communications Manager, OS/2 Database Manager, and DDCS/2. The chapter presents various component combinations and explains the role of each component in a particular scenario.

• Chapter 5, “DataHub Tools Conversation Connectivity”

This chapter addresses DataHub tools conversation connectivity with working examples. DataHub is a DRDA application and needs to be defined as such. In addition to DRDA, DataHub uses APPC for its tools conversation flows. This chapter shows how to establish APPC connectivity for those tools conversation flows.

• Chapter 6, “Problem Determination”

This chapter addresses the problem determination issues of DataHub and the network on which it operates. It includes a generic method of dealing with LAN and network problems as well as DRDA and DataHub connectivity problem determination. The chapter also discusses problem determination tools that are available.

• Chapter 7, “Security Considerations”

This chapter addresses planning for network security. It includes an overview of the security issues on the different platforms of the network. The chapter discusses password modification issues in a distributed environment and offers some recommendations for dealing with different security systems.

• Appendix A, “OS/2 Communications Manager Network Definition File”

This appendix provides an example of the network definition file of a DataHub/2 workstation configuration.

How to Read This Document

It is critical that you familiarize yourself with DataHub before reading this book. It is also critical that you understand the different data flows DataHub uses, because you will have to implement the related connectivity parameters—the most important part of the DataHub implementation. To this end, Chapter 1 presents an overview of DataHub and the data flows it uses. We also recommend that you read the Database Systems Management: IBM SystemView Information Warehouse DataHub Presentation Guide, where DataHub functions are further explained.

DataHub planning is discussed in Chapter 2, which is a self-contained chapter. For planning purposes we also recommend that you read Chapter 4, which discusses more specifically the DataHub/2 environment, and Chapter 7, which deals with DataHub security planning.

The installation of DataHub consists primarily of implementing the different connections the product uses. DRDA connectivity is a major prerequisite for DataHub. We recommend that you install DRDA connectivity before you install DataHub. The implementation of DRDA connectivity is covered in Chapter 3, which is a self-contained chapter that you may find helpful when installing any DRDA environment.
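As an illustration of the kind of directory definitions Chapter 3 presents (see, for example, Figure 61, a REXX procedure to define directory entries), the sketch below catalogs the three OS/2 directory entries a DRDA requester needs: a node, a system database directory entry, and a DCS entry. It is a minimal illustration only, not the book's procedure: all names (DB2MVS, SYSJDB2, MVSDB, DB2LOC01) are hypothetical, and the exact command strings depend on your Database Manager and DDCS/2 levels.

   /* Minimal sketch: define the OS/2 directory entries a DRDA         */
   /* requester needs. All names here are hypothetical examples,       */
   /* not the ITSC network values used in Chapter 3.                   */
   call RxFuncAdd 'SQLDBS', 'SQLAR', 'SQLDBS'    /* register DBM API   */

   /* Node directory: APPC node pointing at CPI-C side information     */
   call SQLDBS 'CATALOG APPC NODE db2mvs REMOTE sysjdb2 SECURITY PROGRAM'
   say 'Catalog node, SQLCODE =' SQLCA.SQLCODE

   /* System database directory: local entry for the remote database   */
   call SQLDBS 'CATALOG DATABASE mvsdb AT NODE db2mvs'
   say 'Catalog database, SQLCODE =' SQLCA.SQLCODE

   /* DCS directory: route MVSDB through DDCS/2 to the DRDA RDB name   */
   call SQLDBS 'CATALOG DCS DATABASE mvsdb AS db2loc01'
   say 'Catalog DCS database, SQLCODE =' SQLCA.SQLCODE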

Chapters 4 and 5 explain the implementation tasks to be carried out after the DRDA installation has been consolidated among the participating SAA platforms. Chapter 5 more specifically deals with the implementation of DataHub tools conversation connectivity.

Chapters 3, 4, and 5 are structured according to the order of the installation tasks we recommend, that is, DRDA connectivity first, the DataHub/2 environment second, and tools conversation third.

Chapter 6 deals with problem determination and is a self-contained chapter. Chapter 7 deals with security and is also a self-contained chapter.


Related Publications<br />

Related Product Documentation<br />

The publications listed below are considered particularly suitable for a more<br />

detailed discussion of the topics covered in this document.<br />

DataHub Documentation<br />

DataHub General Information, GC26-4874<br />

DataHub/2 Message Reference, SC26-3042<br />

DataHub/2 Command Reference, SC26-3044<br />

DataHub/2 User′s Guide, SC26-3045<br />

DataHub/2 Installation and Administration Guide, SC26-3043<br />

DataHub <strong>Support</strong>/MVS Installation and Operations Guide, SC26-3041<br />

DataHub <strong>Support</strong>/VM Installation and Operations Guide, GC09-1322<br />

DataHub <strong>Support</strong>/400 Installation and Operations Guide, SC41-0099<br />

DataHub <strong>Support</strong>/2 Installation and Operations Guide, SC26-3185<br />

DataHub Tool Builder′s Guide and Reference, SC26-3046<br />

SystemView Documentation<br />

SAA: An Introduction to SystemView, GC23-0576<br />

SAA: SystemView Concepts, SC23-0578-00<br />

SAA: SystemView System Manager/400 User′s Guide, SC41-8201<br />

SAA: SystemView End-Use Dimensions Consistency Guide Concepts,<br />

SC33-6472<br />

SAA: A Guide for Evaluating SystemView Integration, GC28-1383<br />

Information Warehouse Documentation<br />

An Introduction to Information Warehousing, GC26-4876<br />

Information Warehouse Architecture 1, SC26-3244<br />

SAA Documentation<br />

SAA Overview, GC26-4341<br />

SAA Common Programming Interface: Summary, GC26-4675<br />

Concepts of Distributed Data, SC26-4417<br />

SAA Common Programming Interface: Communications Reference, SC26-4399<br />

SAA <strong>Database</strong> Reference, SC26-4348<br />

SAA <strong>Database</strong> Level 2 Reference, SC26-4798<br />

SAA Common User Access: Advanced Interface Design Guide, SC34-4290<br />

SAA Common User Access: Basic Interface Design Guide, SC26-4583<br />

SAA Common Communications <strong>Support</strong>: Summary, GC31-6810<br />

© Copyright IBM Corp. 1993 xxi


xxii DataHub Implementation and Connectivity<br />

Distributed Relational <strong>Database</strong> Library Documentation<br />

SAA Introduction to Distributed Data, GC26-4831<br />

Distributed Relational <strong>Database</strong> Architecture Reference, SC26-4651<br />

Distributed Relational <strong>Database</strong> Connectivity Guide, SC26-4783<br />

Distributed Relational <strong>Database</strong> Application Programming Guide, SC26-4773<br />

Planning for Distributed Relational <strong>Database</strong>, SC26-4650<br />

Distributed Relational <strong>Database</strong> Problem Determination Guide, SC26-4782<br />

IBM Distributed Data Management Level 3 Architecture: Reference, SC21-9526<br />

DB2 Documentation<br />

IBM DATABASE 2 Version 2 General Information, GC26-4373<br />

IBM DATABASE 2 Version 2 Administration Guide, SC26-4374<br />

IBM DATABASE 2 Version 2 Licensed Program Specification, GC26-4375<br />

IBM DATABASE 2 Version 2 SQL Reference, SC26-4380<br />

IBM DATABASE 2 Version 2 Application Programming and SQL Guide,<br />

SC26-4377<br />

IBM DATABASE 2 Version 2 Command and Utility Reference, SC26-4378<br />

IBM DATABASE 2 Version 2 Messages and Codes, SC26-4379<br />

IBM DATABASE 2 Version 2 Master Index, GC26-4772<br />

IBM DATABASE 2 Version 2 Reference Summary, SX26-3771<br />

IBM DATABASE 2 Version 2 Diagnosis Guide and Reference, LY27-9536,<br />

available to IBM licensed customers only<br />

SQL/DS Documentation<br />

SQL/DS General Information, GH09-8074<br />

SQL/DS Licensed Program Specifications, GH09-8076<br />

SQL/DS Operation, SH09-8080<br />

SQL/DS <strong>Database</strong> Administration, GH09-8083<br />

SQL/DS System Administration, GH09-8084<br />

SQL/DS Managing SQL/DS, SH09-8077<br />

SQL/DS Installation, GH09-8078<br />

SQL/DS Application Programming, SH09-8086<br />

SQL/DS <strong>Database</strong> Services Utility, SH09-8088<br />

SQL/DS SQL Reference, SH09-8087<br />

SQL/DS Interactive SQL Guide and Reference, SH09-8085<br />

SQL/DS Master Index, SH09-8089<br />

SQL/DS Reference Summary, SX09-1173<br />

SQL/DS Diagnosis Guide and Reference, LH09-8081<br />

SQL/DS Messages and Codes, SH09-8079


AS/400 Documentation<br />

OS/400 Distributed Relational <strong>Database</strong> Guide, SC41-0025<br />

OS/400 Question-and-Answer <strong>Database</strong> Coordinator′s Guide, SC41-8086<br />

OS/400 <strong>Database</strong> Guide, SC41-9659<br />

OS/400 Distributed Data Management Guide, SC41-9600<br />

SAA Structured Query Language/400 Programmer′s Guide, SC41-9609<br />

SAA Structured Query Language/400 Reference, SC41-9608<br />

OS/400 System Concepts, GC41-9802<br />

AS/400 Basic Security Guide, SC41-0047<br />

AS/400 Security Reference, SC41-8083<br />

AS/400 Communications Management Guide, SC41-0024<br />

AS/400 Work Management Guide, SC41-8078<br />

AS/400 Local Area Network Guide, SC41-0004<br />

AS/400 APPN Guide, SC41-8188<br />

AS/400 APPC Programmer′s Guide, SC41-8189<br />

AS/400 Network Planning Guide, GC41-9861<br />

AS/400 Communications Configuration Reference, SC41-0001<br />

Extended Services for OS/2 Documentation

IBM Extended Services for OS/2, G326-0161
IBM Extended Services for OS/2 Messages and Error Recovery Guide, S04G-1017
IBM Extended Services for OS/2 Communications Manager APPC Programming Reference, S04G-1025
IBM Extended Services for OS/2 Command Reference, S04G-1020
IBM Extended Services for OS/2 Communications Manager Configuration Guide, S04G-1002
IBM Extended Services for OS/2 Database Manager Programming Guide and Reference, S04G-1022
IBM Extended Services for OS/2 Guide to Database Manager, S04G-1013
IBM Extended Services for OS/2 Guide to Database Manager Client Application Enablers, S04G-1114
IBM Extended Services for OS/2 Guide to User Profile Management, S04G-1112
IBM Extended Services for OS/2 Hardware and Software Reference, S04G-1017
IBM Extended Services Structured Query Language (SQL) Reference, S04G-1012



DB2/2 Documentation

DATABASE 2 OS/2 Guide, S62G-3663
DATABASE 2 OS/2 Command Reference, S62G-3670
DATABASE 2 OS/2 Information and Planning Guide, S62G-3662
DATABASE 2 OS/2 Installation Guide, S62G-3664
DATABASE 2 OS/2 Master Index and Glossary, S62G-3669
DATABASE 2 OS/2 Messages and Problem Determination Guide, S62G-3668
DATABASE 2 OS/2 Programming Guide, S62G-3665
DATABASE 2 OS/2 Programming Reference, S62G-3666
DATABASE 2 OS/2 SQL Reference, S62G-3667

DDCS/2 Documentation

Distributed Database Connection Services/2 Guide, S04G-1090
Distributed Database Connection Services/2 Version 2 Guide, S62G-3792

CM/2 Documentation

IBM Communications Manager/2 Version 1.0 User's Guide, SC31-6108
IBM Communications Manager/2 Version 1.0 Quick Installation, SX75-0085
IBM Communications Manager/2 Version 1.0 Start Here, SC31-6104
IBM Communications Manager/2 Version 1.0 Information and Planning Guide, SC31-7007
IBM Communications Manager/2 Version 1.0 Workstation Installation Guide, SC31-6169
IBM Communications Manager/2 Version 1.0 Configuration Guide, SC31-6171
IBM Communications Manager/2 Version 1.0 Configuration Worksheets, SX75-0088
IBM Communications Manager/2 Version 1.0 APPC Programming Guide and Reference, SC31-6160
IBM Communications Manager/2 Version 1.0 Keyboard Templates, SX75-0073

VM/ESA Documentation

VM/ESA Connectivity Planning, Administration, and Operation, SC24-5448

VTAM Documentation

VTAM Network Implementation Guide, SC31-6434
VTAM Resource Definition Reference, SC31-6438
VTAM Operations, SC31-6435
VTAM Messages and Codes, SC31-6433
VTAM Programming, SC31-6436
VTAM Programming for LU6.2, SC31-6437
VTAM Diagnosis, LY43-0059


VTAM Data Areas for MVS/ESA, LY43-0057
VTAM Data Areas for VM, LY43-0058
VTAM Reference Summary, LY43-0060

NCP Documentation

NCP, SSP, and EP Resource Definition Guide, SC30-3447
NCP, SSP, and EP Resource Definition Reference, SC30-3448
NCP, SSP, and EP Messages and Codes, SC30-3169
NCP, SSP, and EP Diagnosis Guide, LY30-5591
NCP and EP Reference, LY30-5605
NCP and EP Reference Summary and Data Areas, LY30-5603

SNA Documentation

SNA Technical Overview, GC30-3073
SNA Formats, GA27-3073
SNA Transaction Programmer's Reference Manual for LU Type 6.2, GC30-3084
SNA Format and Protocol Reference Manual: Architecture Logic for LU Type 6.2, GC30-3269
SNA LU6.2 Reference: Peer Protocols, GC30-3073

RACF Documentation

System Programming Library: Resource Access Control Facility (RACF), SC28-1343
Resource Access Control Facility (RACF) Diagnosis Guide, LY28-1016
Resource Access Control Facility (RACF) Command Language Reference, SC28-0733

International Technical Support Center Publications

Introduction to Relational Data, GG24-3300
DB2 Distributed Database Application Implementation and Installation Primer, GG24-3400
Distributed Relational Database Application Scenarios, GG24-3513
Distributed Relational Database Planning and Design Guide for DB2 Users, GG24-3755
Distributed Relational Database Remote Unit of Work Implementation DB2-DB2 and DB2-SQL/DS, Volume 1 and Volume 2, GG24-3715 and GG24-3716
Using SQL/DS in a DRDA Environment with DB2 and OS/2 DBM, GG24-3733
Distributed Relational Database Using OS/2 DRDA Client Support with DB2, GG24-3771
OS/2 DDCS/2 and DB2 V2.3 Distributed Relational Database Performance: Early Experience, GG24-3926
Information Warehouse Implementation Overview, GG24-3869
Database Systems Management: IBM SystemView Information Warehouse DataHub Presentation Guide, GG24-3952





Acknowledgments

This document is the result of a residency project conducted at the International Technical Support Center-San Jose in the first quarter of 1993.

The project was designed and managed by:

• Viviane Anavi-Chaput, DB2 Assignee, ITSC-San Jose.

We want to acknowledge the excellent work of:

• Marc Boulanger, IBM Canada
• Gord Bruce, IBM Canada
• Walter Brungnole, IBM Brazil
• Albert Charles, IBM France
• Jose Carlos Duarte Goncalves, IBM Brazil

Special thanks are due to the following people for the invaluable advice and guidance they provided in the production of this document:

• Tadakatsu Azuma, IBM Japan
• Linnette Bakow, IBM Development, Santa Teresa
• George Barta, IBM Development, Rochester
• Lindsay Bennion, IBM Development, Santa Teresa
• Ritsuko Boh, IBM Japan
• Doreen Fogle, IBM Development, Santa Teresa
• Stan Hoey, IBM UK
• Claude Isoir, IBM France
• Yukho Iwamoto, IBM Japan
• Chris McCarthy, IBM Development, Santa Teresa
• Jim McLaughlin, IBM Development, Santa Teresa
• Barb Lidstrom, Boeing Computer Services
• Masaharu Murozumi, IBM Japan
• Dave Reisner, IBM Development, Santa Teresa
• Kamran Saadatjoo, Analysts International Corporation
• Patrick See, IBM Development, Santa Teresa
• Rick Swagerman, IBM Development, Toronto
• Bill Tri, IBM Australian Programming Center

Thanks also to Maggie Cutler for her editorial assistance.

Viviane Anavi-Chaput, Project Leader
International Technical Support Center-San Jose
June 1993





Chapter 1. Introduction to DataHub

DataHub* is a family of products designed to meet the challenges of managing complex database environments while increasing productivity and reducing costs. DataHub provides integrated database systems management functions for IBM* Systems Application Architecture* (SAA*) relational database management systems (RDBMSs).

This chapter introduces DataHub and the data flows it uses for its different tool functions. A more detailed explanation of DataHub can be found in the Database Systems Management: IBM SystemView Information Warehouse DataHub Presentation Guide.

DataHub and SystemView Strategy

DataHub is a SystemView* conforming product in its user interface design and approach to tools integration. Figure 1 positions DataHub in the SystemView strategy.

Figure 1. DataHub and SystemView Strategy

Users can perform system management tasks on relational data anywhere in the enterprise from a workstation control point (DataHub/2 workstation). A consistent, object-action-based graphical user interface (GUI) gives a common “look and feel” to all DataHub functions. Management includes all the traditional data and database administration functions and is independent of both data location and physical data characteristics. Management of RDBMSs is defined by SystemView and Database Management.



DataHub in Information Warehouse Framework

DataHub's object-action-based platform feature and integrated tools feature help database administrators manage database objects throughout the enterprise. DataHub's tool integration capability is open and addresses database systems management tasks in the areas of change, configuration, operations, and problem management. Tool functions are initiated from the DataHub/2 workstation and may invoke other tool functions on the host systems. The tool functions are written to predefined standards and use common services and documented open interfaces, namely, Distributed Relational Database Architecture* (DRDA*), Common Programming Interface - Communications (CPI-C), and Common User Access (CUA*).

Figure 2 positions DataHub in the Information Warehouse* framework.

Figure 2. DataHub in Information Warehouse Framework

DataHub helps database administrators create and manage the Information Warehouse environment in the following ways:

• Manage databases from a single control point

  From the DataHub/2 workstation DataHub helps manage operational databases and information stores. DataHub provides a window on all database objects and users throughout the information system.

• Display database objects

  DataHub displays and provides information on relational database objects and their related components anywhere in the enterprise.

• Copy data to an information store

  DataHub has the facility to copy database objects from one database to another.

• Manage authorizations on relational data

  With DataHub it is possible to display the authorizations for all objects and users of a database in the system and then copy, add, or delete those authorizations.

1.1 DataHub Architecture Overview

DataHub consists of a workstation component and host components. Each DataHub component has a platform feature and an optional tools feature, both of which operate as a client/server application.

The DataHub functions significantly simplify the tasks of database administrators, system administrators, help desk personnel, system programmers, and application programmers in a centralized or distributed database environment.

Figure 3 shows a possible DataHub configuration that supports four SAA hosts: MVS, VM, OS/400, and OS/2.

Figure 3. DataHub Architecture

Each main DataHub user (DBA, Help Desk, and Sys Prog) has a DataHub/2 workstation that includes a GUI.



On each host, there is a host-specific DataHub component called DataHub Support. The host component does not have a GUI; user access to DataHub Support is from a DataHub/2 workstation only. In the same way, tools can be initiated only from a DataHub/2 workstation.

The DataHub product set includes:

• The workstation component, including:
  − DataHub/2 workstation platform feature
  − Optional tools feature
  − Optional requester feature
• The host components, including:
  − DataHub Support/400 host platform feature with integrated tools feature
  − DataHub Support/MVS host platform feature with an optional tools feature
  − DataHub Support/VM host platform feature with integrated tools feature
  − DataHub Support/2 host platform feature with an optional tools feature.

Each DataHub Support host component is ordered separately.

1.1.1 Platform Feature

The DataHub platform feature includes common services that provide communications and run-time function between the DataHub/2 workstation and the hosts. These common services facilitate the development, integration, and execution of database systems management tools.

The Run function is part of the platform feature and is used to execute SQL, MVS JCL, and Application Systems/400 SQL (AS4SQL) statements at a host. Those statements are stored in an OS/2 file that DataHub/2 can recognize. If the input file to Run contains only SQL, DRDA flows are used for all platforms except OS/2, where RDS is used. For MVS JCL and AS4SQL statements, tools conversation flows are used to send the file contents to DataHub Support at the target host.

The DataHub Tool Builder's Guide and Reference describes how to use the DataHub common services when developing database systems management tools.

1.1.2 Tools Feature

The DataHub tools feature provides database systems management functions, such as copying objects and managing authorizations. You can access the functions through the DataHub/2 window, by entering the command and required syntax from the command line, or by starting the command from the command line and completing it in the DataHub/2 window. Commands can also be stored in a command file and then retrieved and executed, thus reducing the effort required for repetitive tasks.

The tools feature functions include the following:

• Utilities

  The Utilities functions call utilities at a host. Not all Utilities functions are available on all database managers. For instance, the local OS/2 Database Manager (DBM) provides backup and recovery of OS/2 databases with a GUI. The supported utilities are:
  − Load
  − Unload
  − Backup
  − Recover
  − Reorganize
  − Update Statistics.

• Copy Data

  The Copy Data function enables you to copy object definitions, such as tables, views, and indexes, as well as table data, from one database to another.

• Manage Authorizations

  The Manage Authorizations function enables you to copy, add, or delete user and object authorizations between and within databases.

• Display Status

  The Display Status function enables you to see which processes are running and the database activity on both local and distributed database management systems (DBMSs).

Figure 4 is a sample of a DataHub/2 workstation window with database objects and hosts displayed. The table Q.STAFF has been selected. Using the Copy action of the Actions pull-down menu, the Q.STAFF table can be copied to any other RDBMS.

Figure 4. Selecting the Copy Action



1.2 DataHub Data Flows: Overview

Before implementing DataHub it is important to understand the different data flows used by the DataHub tool functions. Once you understand the connectivity issues of DataHub, you can better evaluate the related connectivity implementation tasks; that is, you first need to understand the network elements and DRDA connectivity, implement DRDA, and then install DataHub on top of the created DRDA environment. This is the approach we used in this book.

An understanding of data flows will also help you predict and understand performance issues when using a given tool function in your environment. Figure 5 depicts DRDA, Remote Data Services (RDS), and DataHub tools conversation (TC) data flows.

Figure 5. Data Flow Overview

DataHub uses the following connectivity capabilities:

• DRDA flows

  To process SQL application programming interfaces (APIs), DataHub requires DRDA connectivity between the DataHub/2 workstation and the SAA non-OS/2 DRDA hosts. Connectivity between the OS/2 hosts and the DataHub/2 workstation uses RDS. The DRDA connectivity between the DataHub/2 workstation and non-OS/2 DRDA hosts could use SAA Distributed Database Connection Services/2* (DDCS/2*) at the DataHub/2 workstation or have the DDCS/2 gateway installed at the server workstation as in Figure 5. Remember that having a gateway is optional. In the figure we have DDCS/2 in a separate machine. All requests using DRDA flows going out of the token-ring go through this DDCS/2 gateway. DRDA connectivity between managed OS/2 hosts and managed non-OS/2 DRDA hosts also requires DDCS/2, which can reside on any OS/2 on the local area network (LAN).

• RDS flows

  RDS connectivity is used to process SQL APIs between the DataHub/2 workstation and the OS/2 hosts. RDS connectivity is also used to connect the DataHub/2 workstation to the DDCS/2 server gateway. RDS also would be used if the DataHub database were on a separate machine.

• TC flows

  A TC can be established between the DataHub/2 workstation and the hosts to invoke remote host functions from a DataHub/2 workstation. The following requirements apply to these TCs:
  − Exchange attributes to ensure code-level compatibility.
  − Send header information to allow correct trace correlation between the DataHub/2 workstation and hosts.
  − Send function requests to the host components.
  − Receive function results from previous function requests.

1.2.1 Run

The Run function is used to execute SQL, MVS JCL, and OS/400 AS4SQL statements at a host. The statements for Run are in a file at the DataHub/2 workstation and may have been generated by:

• Another DataHub tool function (Manage Authorizations, for example)
• Another file editor (an ISPF-generated SPUFI file in MVS, for example, downloaded to the DataHub/2 workstation).

Functions

The main functions of Run and the main user options available are outlined below. A sample input file is sketched after this list.

• SQL statements

  Run processes SQL data definition language (DDL), data manipulation language (DML), or data control language (DCL). The SQL is processed using DRDA. However, Run does not return to the user the result of the SQL statement execution. Run returns only the return code telling how the SQL statement executed at the host.

  For SQL processing commands, the DataHub user can specify an integer value for the COMMIT parameter to avoid prolonged locks. The value of the integer specifies the number of statements to be processed between COMMITs.

• JCL statements (MVS only)

  All JCL statements must be fixed-length 80-byte records. DataHub/2 will verify that the JCL statements are targeted for an MVS host. TC flows are used to process JCL. JCL statements usually do not represent large volumes of data and should not be a problem.



• AS4SQL statements (OS/400 only)

  Run can process AS4GRANT and AS4REVOKE commands. Each AS4SQL statement is parsed and then sent on TC flows for processing.

• Report File

  Run creates a report file containing the input statements and the results. The REPORT parameter can have three different values:
  − REPORT filename directs the report to the filename data set on the DataHub/2 workstation. If the file already exists, the new report data will be appended to the existing data.
  − REPORT NO discards the output.
  − REPORT DEFAULT causes a default filename to be created.
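
For illustration, a small Run input file containing only SQL might look like the following sketch. The table, column, and user names (STAFF2, USER2) are invented for the example; Q.STAFF is the sample table shown in Figure 4.

    -- Hypothetical Run input file held at the DataHub/2 workstation.
    CREATE TABLE STAFF2
           (ID      SMALLINT NOT NULL,
            NAME    VARCHAR(9),
            DEPT    SMALLINT,
            SALARY  DECIMAL(7,2));

    GRANT SELECT ON TABLE STAFF2 TO USER2;

    INSERT INTO STAFF2
           SELECT ID, NAME, DEPT, SALARY
           FROM Q.STAFF;

Submitted with, say, COMMIT 1, Run would commit after each of the three statements, keeping locks short, and the report file would record each statement with its return code.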

Data Flows

The data flows involved in the Run function are:

MVS      DRDA flows are used to process SQL statements. TC flows are used to process JCL statements.
VM       DRDA flows are used to process SQL statements.
OS/400   DRDA flows are used to process SQL statements. TC flows are used to process AS4SQL statements.
OS/2     RDS flows are used to process SQL statements.

1.2.2 Utilities

We present below each DataHub Utilities function to help you understand the data flow used.

Functions

Load: The DataHub Load function enables you to load data into an existing relational table. The method that Load uses differs across the various RDBMS platforms as follows:

DB2      The DB2 LOAD utility is used, and the user can use the DataHub edit push-button to edit the generated MVS JCL prior to execution.
SQL/DS   The DBSU DATALOAD is used.
OS/400   CPYF, CPYFRMDKT, or CPYFRMTAP is used, depending on the device specified.
OS/2     The SQLGIMP API is used to import data from a file on the DataHub/2 workstation or the LAN.

Unload: The Unload function extracts data from an existing relational table in any RDBMS and places it in a sequential file at that RDBMS. For an OS/2 host, the sequential file can be a LAN file. The method that Unload uses differs across the various RDBMS platforms as follows (a sketch of subsetting follows this list):

DB2      There is no unload utility in DB2. The DataHub UNLOAD function for DB2 generates MVS JCL for DSNTIAUL. Alternatively, the user can select the DataHub REORG function and use the edit push-button to edit the generated JCL and specify UNLOAD ONLY in the JCL. A subset of data can be unloaded.
SQL/DS   The DBSU DATAUNLOAD is used. Note that subsetting by row or column is not supported.
OS/400   CPYF, CPYTODKT, or CPYTOTAP is used, depending on the device specified. Subsetting of data is not supported.
OS/2     The SQLGEXP API is used to export data. Subsetting of data is not supported.
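
Conceptually, unloading a subset on DB2 amounts to running a query such as the following against the source table and writing the result rows to a sequential data set. The predicate and column list are invented for this sketch; the exact input conventions are those of the DSNTIAUL JCL that DataHub generates.

    -- Hypothetical subset: only department 20, and only three columns.
    SELECT ID, NAME, SALARY
      FROM Q.STAFF
     WHERE DEPT = 20;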

Both the Load and Unload functions operate at only one RDBMS at a time; each function produces a report that includes the statements processed and the results of the processing.

Backup: The Backup function creates a copy of the table data suitable for recovery. The Backup function applies only to tables; it does not support views. The method that Backup uses differs across the various RDBMS platforms as follows:

DB2      The DB2 COPY utility is used. The default SHRLEVEL applies; if the table is partitioned, DataHub warns the user that all PARTS will be copied. The warning is in the form of comments embedded in the generated JCL. An incremental copy can be taken only if the DataHub user edits the MVS JCL before execution.
SQL/DS   The DataHub Backup function is not supported, and DataHub will tell the user to use the Unload function.
OS/400   The CL command SAVOBJ is used, but the journal files are not eligible for DataHub backup.
OS/2     The DataHub Backup function is not supported, and DataHub will tell the user to use the Unload function.

Recover: The Recover function retrieves a previous backup copy of the data and applies the changes committed since the backup was taken. Recover is therefore to the current point in time (PIT). Again, the method that Recover uses is platform-specific:

DB2      The DB2 RECOVER utility is used, although DataHub does not support all options. Recover to a previous PIT is only possible by editing the generated MVS JCL. DataHub Version 1 Release 1 does not support RECOVER INDEX.
SQL/DS   The function is not supported. If DataHub users attempt to recover an SQL/DS table, DataHub will tell them that they should consider using the Unload and Load functions.
OS/400   The CL commands RSTOBJ and APYJRNCHG are used. DSPJRN is also used to determine the start and/or end points for APYJRNCHG.
OS/2     The function is not supported. If DataHub users attempt to recover an OS/2 table, DataHub will tell them that they should consider using the Unload and Load functions.

Reorg: The Reorg function reorganizes table and index data to reclaim space and sort the data. Before execution, the DataHub user can specify FREESPACE values for DB2 and SQL/DS indexes and DB2 tables and the sequencing index for OS/400 and OS/2 tables. The method that Reorg uses is platform-specific:

DB2      The DB2 REORG utility is used. The DB2 REORG SORTDATA parameter is supported by DataHub.
SQL/DS   There is no table reorg function in SQL/DS, but REORG index is supported. The DBSU REORGANIZE is used.
OS/400   The RGZPFM command is used for tables, but there is no index reorg function in OS/400. The DataHub user can specify which index is to be used for sequencing the table data.
OS/2     The SQLGREOR API is used. The Reorg index is not supported.

Update Statistics: The Update Statistics function updates RDBMS catalog statistics. The method that Update Statistics uses is platform-specific (a sketch of the SQL/DS form follows this list):

DB2      The RUNSTATS utility is used. The DataHub user may either specify ALL to include table and index statistics or edit the JCL before execution and specify RUNSTATS INDEX to limit processing to an index.
SQL/DS   The UPDATE STATISTICS FOR TABLE function is used, which includes index statistics. If statistics for all columns are required, the DataHub user can specify ALL.
OS/400   The DataHub Update Statistics function is not supported, as the operating system keeps the catalog statistics current.
OS/2     The SQLGSTAT API is used. If the DataHub user specifies ALL, index statistics are also updated.
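
Because UPDATE STATISTICS is an SQL statement in SQL/DS (as noted under Data Flows below), the function reduces to statements of the following form. The owner and table names are invented for this sketch.

    -- Refresh table and index statistics for one table.
    UPDATE STATISTICS FOR TABLE DBA01.STAFF;

    -- With the ALL option, column statistics are gathered as well.
    UPDATE ALL STATISTICS FOR TABLE DBA01.STAFF;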

Data Flows

The data flows used for the DataHub Utilities function are as follows, by platform:

MVS      MVS uses standard DRDA flows to extract tablespace information from the target DB2 host (this is required to map the user-selected table to the correct DB2 tablespace). The Run function is then invoked and uses the TC flows defined earlier.
VM       The Update Statistics function uses DRDA flows in VM (UPDATE STATISTICS is an SQL statement in SQL/DS). The other utilities use TC flows to process DBSU commands.
OS/400   Utilities use TC flows, generate OS/400 Control Language (CL) commands, and pass the command parameters to the host in the EMQDA. The OS/400 commands are executed using the OS/400 CLI.
OS/2     Utilities functions are directly supported by OS/2 Database Services utilities. If the utilities run on the local OS/2 DBMS, they use OS/2 flows to the Database Services APIs. Utilities that act on remote RDBMSs use RDS.

1.2.3 Manage Authorizations

We present below each Manage Authorizations (MA) function to help you understand the data flow used.

Functions

Add authorizations: From the DataHub/2 workstation, MA can add SQL authorizations that are processed at any relational database registered to DataHub. This includes AS4SQL statements introduced by DataHub for OS/400 authorization processing.

Delete authorizations: MA can also delete authorizations at any relational database host registered to DataHub.


Copy object authorizations: Authorizations for an object at a source relational database can be copied to a target relational database. MA evaluates and reports any implications attributable to differences in SAA hosts. The same relational database can be both the source and the target.

Copy user authorizations: User authorizations can also be copied, from one relational database to another, and within the same relational database.

Perform authorization translation: During copy, MA uses translation tables to convert a source authorization to a target authorization. This translation is required when the source authorization is not supported at the target system. (A sketch of the kind of DCL involved follows the list below.)

Processing options: MA provides a number of processing options. The DataHub user can specify:

• Whether MA should continue or stop processing after an error.
• The commit frequency, as an integer value. The commit frequency corresponds to the number of SQL statements processed between COMMITs at the target relational database.

MA provides an implications report option. This is especially useful if the DataHub user is not experienced in the host environment where this function is being used. MA reports the following implications:

• For DB2 and SQL/DS, the objects and authorizations affected by cascading delete requests. (There is no equivalent in OS/2 or OS/400.)
• In OS/400, the implications when PUBLIC and authorization lists are involved, and when the object is a table or a view.
• For copy between unlike relational databases, each authorization translation.
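
As a concrete sketch of what MA generates and processes at a host, the statements below show an add and a delete of table privileges in SQL DCL. The table and user names are invented for the example, and the actual statements depend on the source catalog and the translation tables.

    -- Add authorizations: grant two table privileges at the target.
    GRANT SELECT, INSERT ON TABLE Q.STAFF TO USER2;

    -- Delete authorizations: the corresponding revoke. On DB2 and
    -- SQL/DS a revoke can cascade to dependent authorizations; the
    -- implications report enumerates what would be affected.
    REVOKE INSERT ON TABLE Q.STAFF FROM USER2;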

Data Flows

The data flows involved in the MA function are:

MVS and VM   DRDA flows are used for processing SQL and catalog extracts.
OS/400       TC flows are used for processing AS4SQL and catalog extracts.
OS/2         RDS is used for remote OS/2 RDBMSs, and OS/2 DBM APIs are used for local OS/2 RDBMSs.

1.2.4 Display Status

The Display Status function enables the user to obtain information about database activity in OS/400, SQL/DS, and DB2 RDBMSs.

Functions

The Display Status function can be used to:

• List the status of all units of work for a user at a given DataHub-managed host
• List the status of all RDBMS work at a relational database
• List the status of RDBMS work at another system
• Display the lock for a given piece of work
• Display work holding or waiting for a lock on a given object.

The DataHub/2 workstation coordinates the Display Status function. It processes the input DISPLAY command, generates the Display Status flow with the host using the platform common services, and invokes the Display Status host component to perform the Display Status function at the host and process the host output.

Data Flows

The data flows involved in the Display Status function are:

MVS      DataHub/2 TC flows are used for conversations with DataHub Support/MVS on an MVS host. The DB2 Instrumentation Facility Interface (IFI) API is used to retrieve information from DB2.
VM       DataHub/2 TC flows are used for conversations with DataHub Support/VM on a VM host. The SHOW operator command is used to retrieve the information in VM.
OS/400   DataHub/2 TC flows are used for conversations with DataHub Support/400 on an OS/400 host. The OS/400 CL (Control Language) API is used to retrieve the information from OS/400.

1.2.5 Copy Data

Copy Data (CD) helps you copy relational data from one RDBMS to another, or within the same RDBMS. The source and target RDBMSs can be like or unlike.

Functions

CD generates the necessary SQL DDL for the target system and copies related objects at the same time. If requested, authorizations can also be copied. Tables, views, indexes, and authorizations associated with these database objects are eligible for CD processing. In addition, multiple tables, single or multiple views, views on views, underlying base tables, and referential constraints can be copied.

When the source and target RDBMSs are unlike, DDL translation might be required. For example, a source AS/400 CHAR column data type definition with length 4000 is translated to LONG VARCHAR for a target DB2 table definition.
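
That translation can be pictured as follows. The table and column names are invented for this sketch; the 254-byte limit cited in the comment is the DB2 CHAR limit that forces the change.

    -- Source table definition on the AS/400:
    CREATE TABLE ORDERS
           (ORDERNO  INTEGER NOT NULL,
            NOTES    CHAR(4000));

    -- DDL generated by CD for the DB2 target. DB2 limits CHAR
    -- columns to 254 bytes, so the column becomes LONG VARCHAR:
    CREATE TABLE ORDERS
           (ORDERNO  INTEGER NOT NULL,
            NOTES    LONG VARCHAR);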

CD copies objects, as well as the authorizations associated with the objects. Optionally DataHub/2 provides an authorization translation mechanism using data stored in tables. The objective of these tables is to relate functionally equivalent authorizations across unlike host RDBMSs.

Three valid combinations of CD execution are available:

• Generate DDL/DCL and stop
• Generate DDL/DCL, process at target, and stop
• Generate DDL/DCL, process at target, and copy and load data.

For each of these operations, the user can optionally store the generated SQL statements in a file at the DataHub/2 workstation.

CD usage should be considered carefully because copying data from one RDBMS to another could have a serious impact on your network.

The unload/load method is system-specific and allows the user to select one of the following options:

• Add the data to an existing table
• Validate that the table is empty, and stop if it is not.


The data is FETCHed by the target host, then written to the target table (see Figure 6 on page 13). This fetching process allows BLOCK FETCH to be used and is more efficient than remote INSERT, which operates one row at a time. The method used to write to the target table is platform-specific.
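
In rough embedded-SQL terms, the copy step behaves like the following cursor loop run at the target host. This is a sketch only, with invented table names; the actual write to the target table is the platform-specific load method described above.

    -- At the target host, which acts as application requester
    -- to the source RDBMS:
    DECLARE C1 CURSOR FOR
            SELECT ID, NAME, DEPT, SALARY FROM Q.STAFF;  -- source table
    OPEN C1;

    -- Rows arrive in blocks (BLOCK FETCH) rather than one network
    -- message per row; a remote INSERT would cost one flow per row.
    FETCH C1 INTO :ID, :NAME, :DEPT, :SALARY;
    INSERT INTO STAFF2                                   -- local target
           VALUES (:ID, :NAME, :DEPT, :SALARY);
    -- ...FETCH and INSERT repeat until SQLCODE +100...

    CLOSE C1;
    COMMIT;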

CD operation is perceived by the user to be a single step. If the source RDBMS is on OS/2, this method cannot be used. Remember that OS/2 has implemented only the client DRDA capability. So, to copy data from an OS/2 DBM, CD has to send the request to the OS/2 source instead of the target, as it does for all other platforms. Therefore the OS/2 DBM uses the insert process from the source to the target.

Figure 6. Copy Data Flows for MVS, VM, and OS/400 Platforms. This figure illustrates the connectivity between the DataHub/2 workstation and the source and target hosts. It applies to MVS, VM, and AS/400 hosts.

Data Flows

CD uses different data flows depending on the type of platform involved. Refer to Figure 6 to understand the following explanation of data flows:

• DataHub/2 workstation

  The DataHub/2 workstation CD component coordinates the copy operation. MA generates the required DCL, and CD calls Run to process DDL and DCL at the target host. The DDCS/2 application requester (AR) initiates the DRDA flows with the source and target hosts.

• Source host

  The source host acts as an application server (AS) to the DataHub/2 workstation to get the SQL generation. It also acts as server for the copy operation to the target host.



• Target host

  The target host is both an AS and an AR and executes DataHub functions using TC flows. As an AS, the target processes SQL to create the objects from the DataHub/2 workstation AR. CD at the DataHub/2 workstation processes the command request to initiate the copy operation. The target host is the AR for the copy and load of data.

Note that CD first verifies with the target that the requested CD operation is valid, before generating SQL. (The operation is invalid if, for example, the user specifies that the target table is empty, but it is not.)

The same CD procedure is not possible with the function currently available on OS/2. The process used in this case is explained in two steps: (1) object creation and (2) the CD operation itself.

Object creation

In the object creation process the RDS flow is used to query the catalog to get the object definition at the source and to create the objects at the target. Refer to Figure 7.

Figure 7. Copy Data Involving OS/2 Hosts

• RDS used for catalog queries, if an OS/2 source

  For the copy operation to extract the object definition from the source OS/2 host catalog, DataHub uses RDS, which is an OS/2 DBM API function.

• SQL processed using RDS, if an OS/2 target

  As with catalog requests, processing of SQL to create the objects at a target OS/2 host uses RDS. In fact, this function is performed by Run, which CD calls to process SQL.
calls to process SQL.


Copy Data

Once the catalog object definition has been extracted from the source, and the SQL object creation phase has completed at the target, CD copies and loads the data. When an OS/2-managed host is involved, mechanisms for copying data vary slightly across the three combinations of CD execution:

• OS/2 source and unlike target

  Figure 8 shows the data flows involved in copying data from an OS/2 source to an unlike, managed, DRDA host.

  Figure 8. Copy Data from OS/2 Source to Unlike Target

  The DataHub/2 workstation sends the copy request to the source DataHub Support/2 using TC flows. DataHub Support/2 then fetches the data from the local database and inserts it at the target system using DRDA flows through DDCS/2. DRDA flows are used for the copy process.

• OS/2 target and unlike source

  Figure 9 on page 16 shows the data flows involved in copying data from an unlike, managed, DRDA host to an OS/2 host.



  Figure 9. Copy Data to an OS/2 Target from Unlike Source

  In this case, the DataHub/2 workstation sends the copy command to the target OS/2, using TC flows. The target fetches the data from the source database using DRDA flows and finally inserts it into the local database. DRDA flows are used for this copy process.

• OS/2 source and target

  Figure 10 shows the data flows involved in copying data from a managed OS/2 host to another managed OS/2 host.

  Figure 10. Copy Data from OS/2 Host to OS/2 Host


  When both source and target are OS/2 hosts, the DataHub/2 workstation sends the copy command to the target using TC flows. The target fetches the data from the source using RDS and inserts it into its local database.

1.3 Data Flow Summary

Table 1 indicates the outbound protocol used for each function initiated by the DataHub/2 workstation for the target host environment. The Copy Data and Copy Data with load functions use several data flows, so we show them in a separate table (Table 2). For example, for a copy with load from a DB2 host to an OS/2 DBM host, the request is sent to the OS/2 host using the TC flows. The OS/2 host will copy and load using the DRDA flows.

If you have a DDCS/2 gateway system in your environment, the data flows between the gateway and the DataHub/2 workstation are always RDS.

Table 1. Data Flows Used by Tool Functions. This table shows the data flows used between the DataHub/2 workstation and the different hosts.

Function                 MVS Host   VM Host   AS/400 Host   OS/2 Host
Run (SQL)                DRDA       DRDA      DRDA          RDS
Run (JCL)                TC         N/A       N/A           N/A
Run (AS4SQL)             N/A        N/A       TC            N/A
Display RDBMS work       TC         TC        TC            N/A
Manage Authorizations    DRDA       DRDA      TC/DRDA       RDS
Display Status           DRDA       DRDA      DRDA          RDS
Load                     TC         TC        TC            RDS
Unload                   TC         TC        TC            RDS
Backup                   TC         N/A       TC            N/A
Recover                  TC         N/A       TC            N/A
Reorg                    TC         N/A       TC            RDS
Update Statistics        TC         DRDA      N/A           RDS

Table 2. Data Flows Used by Copy Data

Copy Data Steps                MVS Host   VM Host    AS/400 Host   OS/2 Host
Object Creation (non-OS/2)     DRDA       DRDA       DRDA          N/A
Copy Data (non-OS/2)           TC/DRDA    TC/DRDA    TC/DRDA       N/A
Object Creation (OS/2 source)  RDS/DRDA   RDS/DRDA   RDS/DRDA      RDS
Copy Data (OS/2 source)        TC/DRDA    TC/DRDA    TC/DRDA       TC/RDS
Copy Data (OS/2 target)        TC/DRDA    TC/DRDA    TC/DRDA       TC/RDS





Chapter 2. Planning for DataHub

You have just been given the job of installing a management system for your company's databases. You cannot carry out the project yourself, but you do know who in your organization you need to enlist to be successful. You know that there are various host systems, users in several remote locations, and a network tying them together. Your first job should be to determine which data is to be managed, who owns the data, and who will manage it. You can then design a DataHub solution based on your requirements.

To assist you in developing an installation plan that fits your environment, this chapter describes three typical customer scenarios and defines the DataHub components and other OS/2 prerequisites required for each. We also discuss the different skills needed to implement each component.

Once you have defined your plan, you should read the subsequent chapters to lead you through the implementation.

Before you install DataHub you must provide the correct network connectivity and an environment that supports the DRDA architecture. Chapter 3, "DRDA Connectivity" on page 31 shows you how to do this.

As you will see in the scenarios, there is more than one way to implement DataHub. Some configurations provide more function and require more components than others. We discuss the OS/2 options in Chapter 4, "DataHub/2 Workstation" on page 105.

After you have installed a network and the DataHub/2 environment, Chapter 5, "DataHub Tools Conversation Connectivity" on page 151 will help you tie it all together into a working, DataHub-managed system.

2.1 Customer Business Requirements

DataHub provides management of relational databases. You want to be able to manage local data, remote data, or combinations of both. Your environment comprises one or more of the following:

• Local management of local data (one to one)

  All data is in the same physical location, possibly on a local network of several workstations. The administrator is responsible for authorizing new users, backing up the data, and restoring data in the event of a failure.

• Central management of distributed data (one to many)

  The administrator in this case looks after the management of all central databases on different host platforms. Subsets of some data are required for application testing. The administrator also is responsible for normal backup and restore as well as user authorization.

• Distributed management of distributed data (many to many)

  Several people manage data in remote and central locations. Remote administrators have full control over local data and some functions at the central site. Central administrators have full control over all data.



2.2 Sample Scenarios

Let's see how DataHub is used in each of these environments. For each sample scenario, we show the required components of DataHub and their interrelationship. You can then select those components that match your environment and use or modify the appropriate configurations. We include the actual configurations we used in our scenarios.

2.2.1 Scenario 1: Local Management of Local Data

You have one OS/2 production database and would like to manage it with DataHub just to see what function DataHub provides. Your database administrator accesses DataHub from a control point, an OS/2 system running DataHub/2, as illustrated in Figure 11. The database you manage resides on what we call a source system. In this scenario, one function you use is Copy Data for backup requirements. The database tables on the source system are copied onto a third system, the target system. We select this scenario because it shows the function of each DataHub component in OS/2 and the relationships between DataHub/2, DataHub Support/2, and the OS/2 Database Manager.

Figure 11. Scenario 1: Local Management of Local Data


Components Required

• DataHub/2 workstation
  − OS/2 version 2.0
  − OS/2 Communications Manager and OS/2 Database Manager
  − LAN Adapter and Protocol Support (LAPS)
  − DataHub/2
• Managed host
  − OS/2 version 2.0
  − OS/2 Communications Manager and OS/2 Database Manager
  − LAPS
• Target
  − OS/2 version 2.0
  − OS/2 Communications Manager and OS/2 Database Manager
  − LAPS
  − DataHub Support/2.

The managed host is the source of any data tables. DataHub/2 on the DataHub/2 workstation reads the catalog information on the managed host and target machine. DataHub Support/2 on the target uses basic OS/2 Database Manager Remote Data Services (RDS) to communicate with the OS/2 DBM on the managed host. Notice that the managed host does not have any DataHub components. We show in Chapter 4, "DataHub/2 Workstation" on page 105 how this configuration works. For planning purposes, this scenario shows a minimum configuration and introduces the components and their function. The managed host cannot be a target of any DataHub functions unless DataHub Support/2 is installed on it.

Function Provided

Full DataHub function is provided within the OS/2 platform, with the managed host as the source for data only.

Skills Required

The person implementing DataHub is probably your local database administrator. All components are installed on the OS/2 platform, so some PS/2* LAN skills are required.



2.2.2 Scenario 2: Central Management of Distributed Data

You have several host databases on different platforms (see Figure 12). One person looks after all database administration functions. You want full DataHub function across all platforms.

Figure 12. Scenario 2: Central Management of Distributed Data

Components Required

• DataHub/2 workstation (administrator's workstation)
  − OS/2 version 2.0
  − OS/2 Communications Manager and OS/2 Database Manager
  − LAPS
  − DDCS/2, single-user version
  − DataHub/2
• Managed host (OS/2)
  − OS/2 version 2.0
  − OS/2 Communications Manager and OS/2 Database Manager
  − LAPS
  − DDCS/2, if source or target of cross-platform function
  − DataHub Support/2
• Managed host (MVS, VM, or OS/400)
  − Applicable operating system and database
  − DataHub Support
• Network
  − VTAM configured for APPC connectivity between nodes. Table 1 on page 17 shows the protocols used between DataHub/2 and the different hosts.

Note: You need the multiuser version of DDCS/2 to provide a DRDA gateway for the DataHub/2 workstation and host 1. Multiuser DDCS/2 requires an OS/2 Database Manager server. In this scenario DDCS/2 is on the managed host 1 machine. It also could have been on another machine.

If you do not have host 1, you could install the single-user version of DDCS/2 on the DataHub/2 workstation to gain access to the other hosts. Single-user DDCS/2 does not require an OS/2 Database Manager server.

Function Provided

• Full DataHub function is available on all managed hosts.

Skills Required

• The same OS/2 skills are required here as in scenario 1.
• Your database administrator coordinates the definition of database names and authorizations.
• Network systems programmers help with connectivity issues.



2.2.3 Scenario 3: Distributed Management of Distributed Data

An enterprise has several branches in various locations. Several central administrators control all central data and have some control of the remote systems. Each branch uses local and central data. The remote branches are large enough to require a LAN and host connectivity. The local data is stored on a LAN server, which is also the communications gateway. Several people in the branch manage the local data, and one of them is responsible for getting subsets of local data from the central host. Figure 13 represents one of the branch installations with three administrators.

Figure 13. Scenario 3: Distributed Management of Distributed Data

Components Required

• Same as scenario 2, plus:
  − LAN server at the DataHub/2 server
  − LAN requester at each DataHub/2 workstation
  − OS/2 Database Manager server on the DataHub/2 server
  − OS/2 Database Manager clients at each DataHub/2 workstation
  − DDCS/2 multiuser version. The DDCS/2 multiuser version does not have to be on the DataHub/2 server machine. However, it must be on the same machine as the OS/2 Database Manager server. If you put the DataHub/2 database and DDCS/2 on different machines, you need an OS/2 Database Manager server on each.

Refer to Chapter 4, "DataHub/2 Workstation" on page 105 for LAN design considerations.

Function Provided

• Full DataHub function is provided across platforms.
• Ease of maintenance of DataHub/2 workstations. DataHub/2 program code and the DataHub/2 database are stored on the LAN and DataHub server machine. Cross-platform host access is centralized in the DDCS/2 gateway. See Table 16 on page 144 for LAN server dependencies on OS/2 versions.

Skills Required

Each additional level of access and data availability builds on the products installed, and therefore the skills, in the previous scenarios. Some LAN administration skills are required in scenario 3 because of the LAN requester and LAN server. A help desk may be needed in this environment to assist users in problem determination.



2.3 Factors Affecting Implementation

Your installation of DataHub will be unique. The following factors determine what shape your solution will take:

• Number of databases to be managed
• Location of the data
• Type of RDBMS
• Number of administrators
• Location of the administrators
• Centralized or decentralized control
• Existing network.

Once these factors are known, you can start to place the components in the proper locations in the network and plan the connections. In some cases you may have to make changes to the existing environment; in other cases you can configure the components to match the standards adopted by your organization.

2.4 Planning for Implementation: A Checklist

This section covers some of the factors you should consider for your specific environment. Once you have gathered the information on the data you want to manage, you can design a system using DataHub that satisfies your requirements. You can then determine the skills required to implement the system and establish naming conventions for your DataHub installation.

2.4.1 Gather Information

You need to know which data you want to manage, who will use and manage it, and how it is distributed.

Data to Be Managed

Your most important design consideration is defining the data to be managed. First, determine:

• Number and location of databases
• Number and size of tables
• Which tables to manage
• Which DataHub functions are required
• When each function is executed
• Whether data is local or enterprise
• Where backup data is stored.

Once you understand the scope of the data, you can determine the size of the management workload.


Users

You need to know who uses the data and who manages it. For users of local data, determine:

• Number and location of users
• Peak periods, time available for management functions.

For DataHub administrators, find out:

• Number and location of current administrators
• What tasks the administrators perform
• Whether management should be local or central.

Knowing who manages the data will help you determine where and how the DataHub products should be installed to provide the function you need.

Network

The complexity of data management is influenced to a large extent by how the data is distributed. Large networks probably have data dispersed on multiple nodes. Your greatest challenge in installing DataHub is connecting the DataHub/2 workstations to the managed hosts and then interconnecting the hosts. So you must document your network definitions and include:

• Hosts and the databases they contain
• VTAM and NCP locations, names, and full definitions
• Gateway nodes, if any
• Line speed of each link and protocols carried
• LAN types and speeds
• LAN bridge or router locations and segment names.

You should now have a good picture of your network topology. You will refer to it to provide the APPC and DRDA connectivity DataHub requires.

2.4.2 Design the Solution

Once you have gathered all of your data management information, you are in a position to select and assemble the components required to implement DataHub.

DataHub/2 Workstation

Refer to Chapter 4, "DataHub/2 Workstation" on page 105 for LAN design considerations.

• Determine the topology for:
  − A single DataHub/2 workstation
  − LAN connections for server-based, multiple DataHub/2 workstations
• Install and configure all prerequisites for DataHub/2.



Managed Hosts

• Install and configure the appropriate DataHub Support program and prerequisites.

APPC Network

Provide connections between:

• DataHub/2 workstations and managed hosts
• Managed host to managed host, if source or target of DataHub functions.

You can see now that the skills needed to implement DataHub come from more<br />

than one person in your organization. You probably already have most of the<br />

skills available, but this may be the first project that needs a combination of<br />

skills in separate areas of the business. You need to have all of these areas<br />

contribute, and you need someone to coordinate the different tasks.<br />

Management of the DataHub Implementation Project

In a cross-platform implementation, your database administrator should be the project coordinator. The coordinator calls on the services of the host, network, and workstation specialists to bring all the components together. The database administrator is also the person who can answer most of the data location and volume questions.

LAN

Regardless of your host configuration, if you have more than one DataHub/2 workstation, they will most likely be on a LAN. Therefore, additional LAN skills are required for requester-to-server connections in setting up the LAN server definitions for clients, DataHub/2 databases, and program files.

Network

You need appropriate skills in VTAM, SNA LU 6.2, and OS/2 Communications Manager to define and configure the network from the workstations to the host databases and between hosts directly. The network definitions are also used in the workstations.

Host

For each host platform, appropriate skill is required to install and configure the DataHub Support product and connections to the network.

2.4.4 Naming Conventions

When you use DataHub to manage your databases, you invoke programs on machines connected by a network. Each of these programs, machines, and the network must be uniquely identified. You do this by naming them. You also give names to users, databases, tables, locations, aliases, hosts, DataHub/2 workstations, servers, logical units and their partners, and the network nodes. Some of these names, such as the names of programs, are fixed. Most are your choice, but you should have a standard way of selecting names. Refer to Table 19 on page 154 and Table 20 on page 154 for the conventions we used in the ITSC. For instance, we prefixed database names with a single character representing the host type: O, A, V, and M for OS/2, AS/400, VM, and MVS, respectively.


SNA Network

The SNA LU 6.2 architecture manages the connections from DataHub/2 workstations to databases and between databases. The names defined on any node are referenced by the partner node. These names include network ID, destination address, local address, logical unit, transmission service mode, location name, database name, and transaction program name. Each name becomes part of the address or conversation when setting up a link.

Your network administrators have probably defined most of the naming standards discussed below. They are mentioned here to assist you in matching your definitions to existing items. Refer to Chapter 3, “DRDA Connectivity” on page 31 and Chapter 5, “DataHub Tools Conversation Connectivity” on page 151 for more detail on DRDA and SNA definitions.

Network Identification: One physical network may provide a path for data flow between nodes on different logical networks. The logical network is named by VTAM or NCP definitions. A physical connection is necessary but does not ensure the logical link. The network identification is similar to the country line on an envelope you mail.

Logical Unit, Partner Logical Unit, Transaction Program Name: These are defined by network and database administrators. A logical unit is a name given to a user or application. An application at an LU refers to the LU name of the application to which it is trying to connect. For instance, XXX is an LU on OS/2, and YYY is an LU on OS/400. XXX wants to talk to YYY, so YYY is a partner LU from XXX's viewpoint. Once connected, XXX wants YYY to run program name ABC.

Transmission Service Mode: The name of the transmission service mode on each end of the conversation must be the same. It defines the block size and the number of blocks to send or receive at one time. See your network administrator for this definition.

Location and Database Names: Your database administrator should define these.

Local Area Network

We assume that the hosts and databases already exist and are waiting to be managed by DataHub. Your DataHub/2 workstation may be an OS/2 system that is directly attached to your managed host. This connection requires the fewest configuration changes and prerequisites. If you have one or more DataHub/2 workstations on a LAN, installing DataHub/2 and its prerequisites requires adding definitions to the existing LAN. This section points out some of these changes or additions.

You define several items within the LAN environment: on the workstations, LAN servers, and database servers. Some of the names you give these items are aliases. An alias is the name of a definition list. The alias is known only at the place where it is defined. The items in the list point to the real resource, node, database, or program being defined. A user or application refers to an alias to access the definition of the resource. Therefore, different users could refer to the same host database by different names. This makes administration or problem determination difficult unless some convention is used. Refer to Chapter 4, “DataHub/2 Workstation” on page 105 for more details on aliases.

You should define conventions for:

• Physical resources
− Requester workstation
− LAN server or database server
− Locally administered LAN adapter address
− LAN segments, bridges, routers
• Logical resources
− DataHub/2
- Host and profile names
- RDB names
− OS/2 Database Manager
- System directory aliases
- Workstation directory aliases
- Database names.

2.5 Summary and Recommendations

This chapter presents three scenarios that relate your business requirements to the DataHub components. You should by now have some idea of how to install DataHub to satisfy your requirements.

Consider the following recommendations as you read further. Keep them in mind and relate them to the appropriate area of each subsequent chapter.

• Test each component individually as it is installed. Before you attempt an LU 6.2 session on a newly defined node, see if you can activate a 3270 or 5250 session. This will verify the physical connection.

• Do not wait until DataHub is installed to try your DRDA connections. If DDCS/2 is one of your prerequisites, use the command line interface to connect to the remote database; a sketch of such a test follows this list. This will verify the OS/2 Database Manager catalogs and logical connections.

• If you are replacing an existing function or connection with new code, make sure the old function or connection still works before you build onto it.

• DataHub creates log, error, and report files during installation and execution. If you have any problems, look at these files.

• Look in the product manuals to see which trace facilities are provided. Learn how to start and read the available traces, and get your VTAM, OS/2, and host database administrators to help with problems.

• Start small. If your implementation looks like scenario 3, run a pilot on a single DataHub/2 workstation. This approach will provide a functioning DataHub environment without the additional complexity of clients and servers. Your LAN administrator can set up the server once the base DataHub/2 function works. Additional DataHub/2 clients can be added easily. The first DataHub user will be the most difficult and will require the most work on your part.

• Keep a log of any problems encountered and their solutions. Someone else may waste time researching a problem you have already solved.
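For example, the following OS/2 commands are a minimal sketch of such a DDCS/2 connection test. The node, alias, and symbolic destination names (NYNODE, NYDB, NYCPIC) are our placeholders, the symbolic destination name is assumed to be defined in the Communications Manager CPI-C side information, and the user ID CHAPUT is taken from our ITSC definitions; check your DDCS/2 documentation for the exact catalog command syntax of your release.

  REM Catalog the APPC node, the DCS entry, and a database alias
  DB2 CATALOG APPC NODE NYNODE REMOTE NYCPIC SECURITY PROGRAM
  DB2 CATALOG DCS DATABASE NYDB AS DB2CENTDIST
  DB2 CATALOG DATABASE NYDB AS NYDB AT NODE NYNODE
  REM Verify the logical connection with a trivial catalog query
  DB2 CONNECT TO NYDB USER CHAPUT USING password
  DB2 "SELECT COUNT(*) FROM SYSIBM.SYSTABLES"
  DB2 CONNECT RESET

If the CONNECT succeeds, the OS/2 Database Manager directories and the underlying APPC definitions are correct; a failure at this point isolates the problem to the connectivity layers rather than to DataHub.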


Chapter 3. DRDA Connectivity

DRDA is the foundation for DataHub implementation. In this chapter we explain how to implement DRDA connectivity. We emphasize the key connectivity parameters that link the different SAA platforms to enable them to communicate using DRDA.

We present the definitions we used to interconnect the different hosts that are part of the ITSC DRDA and DataHub network. The ITSC network uses a token-ring connection to interconnect the SAA platforms. Because each customer environment differs, you should consider our definitions only as examples that will help you define your own network. We do not discuss all the possible physical connections because we implemented only one of them.

We also provide you with a list of actions you need to perform to add another relational database management system (RDBMS) to your DRDA network. Finally, we end with some recommendations you may find useful when you implement your DRDA network.

Before we begin to explain DRDA connectivity, it will be useful to review some SNA network concepts and definitions.

3.1 SNA Environment

SNA is the foundation for DRDA implementation. In this section we review some SNA network concepts and definitions. The review will help database administrators implementing DRDA and DataHub understand the networking elements they might have to discuss with the network specialists in their enterprises.

Physical unit (PU)
A PU is not necessarily a physical device. It is the part of a device (programming, circuitry, or both) that performs control functions for the device. It takes action during device activation and deactivation, error recovery, and testing. A host running ACF/VTAM, a communication controller loaded with an NCP, and a cluster controller are examples of PUs.

Logical unit (LU)
An LU is a user of the SNA network. It is the source of requests entering the network. An I/O device, an output printer, and an application running in the host (DB2, DataHub, TSO, CICS, IMS) are examples of LUs.

System Services Control Point (SSCP)
An SSCP is a unit of code within ACF/VTAM that manages a particular area of the network. Starting and stopping resources, assisting in the establishment of connections between LUs, and reacting to network problems are the tasks performed by the SSCP.

Network Control Program (ACF/NCP)
ACF/NCP is a product installed on the host that can be used in conjunction with ACF/SSP (System Support Programs) to generate the Network Control Program (NCP) load module, which is loaded from the host into a communication controller (3745, 3720, 3725, or 3705). The NCP controls the lines and the devices attached to it. It transfers data to and from the devices and handles any errors that occur, including retries after line errors.

Network Addressable Units (NAUs)
NAUs are elements in the network that ACF/VTAM can address. LUs, PUs, and SSCPs are elements that ACF/VTAM can address. VTAM cannot address lines and connections directly.

Nodes
The term “node” is used in two different contexts. In SNA terms, a node is a physical device, such as a host processor, a communication controller, a cluster controller, a terminal, or a printer. Nodes are categorized according to their capability as host nodes, boundary nodes, or peripheral nodes. In VTAM terms, a node is any point in a network defined by a symbolic name. There are major nodes and minor nodes.

Subarea addressing
A subarea is a portion of the network that has a boundary node and includes any peripheral nodes attached to this subarea node. SSCPs and NCPs are subarea nodes.

Domain
A domain is that part of the network that has been activated by one particular SSCP. There is only one SSCP in a domain.

Cross-Domain Resource Manager (CDRM)
A CDRM is a definition in VTAM of the SSCPs in the other domains. A CDRM definition is necessary for cross-domain communication.

Cross-Domain Resource (CDRSC)
A CDRSC is the definition in VTAM of a resource that resides outside the VTAM domain.

Session
A session is a logical connection between two NAUs. SSCP to PU, SSCP to LU, LU to LU, and SSCP to SSCP are examples of the possible types of sessions that can be established.

Bind
In networking terms, a bind is a particular request unit that flows from the primary LU to the secondary LU. The bind is the first request unit to flow on the LU-to-LU session at session startup. It is via the bind (and bind negotiation) that the two LUs agree on the protocols to be used on the session. The information in the bind comes from the LOGMODE definition.

Logon mode (LOGMODE)
The LOGMODE is used to determine the characteristics of a session. It includes Class of Service (COS), pacing, RU size, and protocol information.


Logon Mode Table (MODETAB)
The MODETAB is a table that contains the LOGMODE entries. There can be multiple MODETABs in an SSCP.

Request unit (RU) size
An RU is the part of a transmission flowing across the network that contains data for the use of the destination network node. The node can be an LU, PU, or SSCP. The RUSIZES parameter specifies the maximum length of data one LU can send to its session partner. This parameter is specified in the LOGMODE definition.

Path information unit (PIU)
The PIU is the basic unit that is transmitted around the network. It has to hold enough information to find its way through the network to the destination LU, maintain the protocols that are being used on the session, and carry the data that the user wants to send. The PIU consists of:

• Transmission header (TH), which is 26 bytes long
• Request/response header (RH), which is 3 bytes long
• RU, which is the message or user data.

For more information on the above concepts and definitions, see the VTAM and NCP publications listed in the Related Publications section of this book.

3.2 Key Connectivity Parameters

This section explains the key connectivity parameters used to implement a DRDA network. The parameters we discuss can be divided into three groups:

• SNA parameters
• DRDA parameters
• RDS parameters.

3.2.1 SNA Parameters

The key parameters discussed below are used to establish the physical and logical connections in the SNA network.

Token-Ring Address

In a client/server environment, the client, or originator of the request, usually needs to know how to get to the server, and the server needs to know how to send the answer back to the client. This requires that every system be defined with a unique network address.

In a token-ring network, such as the one we used for the ITSC connections, the uniqueness of the system address is assured by a unique 12-digit number known as the token-ring address. This address is used to identify a system in the token-ring network. If the address has already been used, your system will not be able to connect to the network. Most token-ring installations have a naming scheme in place to assign a unique address to each machine. Some organizations use the physical location (for example, a building and a room number) to ensure uniqueness. Other organizations use the employee ID number, and some use the universal address engraved in the token-ring card.

The requester needs to know the server token-ring address to establish an SNA session over the token-ring network.
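As an illustration only, the layout below sketches one possible location-based convention for a 12-digit locally administered address. The field split is our own example, not a standard; only the practice of starting a locally administered address with 4000 is conventional.

  4000 bbbb rrss
  4000   marks a locally administered address
  bbbb   identifies the building
  rr     identifies the room
  ss     numbers the workstation within the room

Under such a convention, address 400008210200 would be read as building 0821, room 02, workstation 00, so an administrator could locate the adapter from the address alone.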

IDBLK

VTAM/NCP uses the IDBLK parameter to identify the machine in a switched network, which is the kind of network we used in the ITSC. The value differs by type of equipment. For example, an OS/2 system is usually defined with the value 05D. Together with the IDNUM parameter, the IDBLK parameter forms a unique identification for VTAM. Both parameters are used in the XID (exchange identification) negotiation.

IDNUM

VTAM/NCP uses the IDNUM parameter to identify the machine or a PU in a network.

MAXDATA/MAXBFRU

This parameter specifies the maximum message size that one system can send to another system. It affects the performance and system requirements of all participants in a conversation: the sender, the receiver, and other network components.

Partner Token-Ring Address

This address identifies the physical address of the partner in a token-ring network. See “Token-Ring Address” above.

Network Name

This parameter identifies the name defined for the network. The same physical network can provide connectivity for several logical networks. The nodes of a logical network all have the same network name.

Control Point Name

An SNA control point is an LU that is used to control all sessions between the different partners. On VM and MVS hosts, the SSCP performs this role. Before a session can be established between two applications (for example, between OS/2 Database Manager and SQL/DS), a session must be established between the local control point or SSCP and the remote control point or SSCP. The first qualifier of the control point name is the network name.

Logical Unit Name

An LU is a user of the SNA network. Multiple types of LUs (LU type 0, LU type 1, LU type 2, LU type 6.2, and others) can share the same SNA network. Each represents a protocol that is used between different devices or programs. The most important LU type used in a DRDA and DataHub network is LU type 6.2 (LU 6.2).

LU 6.2 enables peer-to-peer or program-to-program communications. Thus it enables the client/server relationship between two systems to evolve over time; that is, a system could be a requester to a specific server for a certain request and a server for another request.


Advanced program-to-program communication, or APPC, is the programming interface that implements the LU 6.2 protocols. Because one is the protocol and the other the interface, the two terms are sometimes used interchangeably.

Another architecture is used as a synonym for LU 6.2: advanced peer-to-peer networking, or APPN. This architecture is based on the LU 6.2 protocols and permits dynamic update of network resources.

In an LU 6.2 session, each participant is represented by a generic name:

• LU name (the requester)
• Partner LU name (the server).

The LU name consists of two qualifiers, each made up of a maximum of eight characters. The first qualifier is the network name, and the second qualifier is the LU name itself. For example, the DataHub/2 server LU name is USIBMSC.SJA2039I.

Partner Logical Unit Name

The partner logical unit is simply another LU in the network. It defines the application or the system with which the LU wants to establish a session. The partner LU also has a two-part name.

Mode Name

The mode is composed of multiple parameters used during session negotiation. The key connectivity parameters are RU size and pacing. The settings of these parameters depend on the applications. To establish a session, the mode names and parameters must match at both ends.

RU Size: RU size is the parameter that specifies how many bytes of data to send or receive. Each send/receive request has a header of 29 bytes (TH + RH). The RU and its header form a PIU.

The RU size parameter is very important for the performance of the DRDA network. When a system wants to establish a link with another system, the RU size used for the communication must be determined during session negotiation. An inappropriate RU size does not affect the session connection, but it could have a negative impact on session performance.

Pacing: Pacing is another value that is considered during session negotiation. The pacing parameter determines how many PIUs will be sent in one transmission before the recipient of the transmission acknowledges receipt. An inappropriate pacing value does not affect initial session connection, but it could have a negative impact on session performance.

3.2.2 DRDA Parameters

DRDA connectivity uses the LU 6.2 protocol to establish sessions between RDBMSs.

RDB Name

Like any other session between intelligent partners, each DRDA partner has a name that is unique in the network. This name is called the RDB name. The definition of this name varies from platform to platform. The RDB name is used as the first qualifier of a three-part name for each relational object (for example, DB2CENTDIST.PROD.ORG). When an RDBMS is not exchanging data with another RDBMS, the RDB name is not very important. But if you want to exchange data, you need to define the RDB name and the partner RDB name.

Partner RDB Name

The partner RDB name identifies the name of the other RDBMS in a DRDA network. This parameter includes an RDB name and an SNA partner LU name. The SNA partner LU name is required to indicate to the system where the partner can be found.

Transaction Program Prefix

The transaction program prefix is a 1-byte character prefixed to the transaction program name. The default value is X'07'.

Transaction Program Name

The transaction program name (TPN) is the name of the transaction program that processes the DRDA request on the target host. You should use the default value of 6DB for all DRDA hosts except VM.

3.2.3 RDS Parameters

Remote Data Services (RDS) is an OS/2 Database Manager private protocol that enables OS/2 Database Manager clients and servers to access remote OS/2 Database Manager servers. Two logical connection protocols can be used to connect the machines:

• LU 6.2
• NETBIOS.

If you use the LU 6.2 protocol, both client and server workstations need to be defined in the SNA network, as discussed in 3.2.1, “SNA Parameters” on page 33. Although you do have a choice of protocols for connecting OS/2 Database Manager clients and servers to OS/2 Database Manager servers, the DataHub implementation requires the use of the LU 6.2 protocol between the DataHub/2 workstation(s) and the managed OS/2 hosts.

If you use the NETBIOS protocol, each OS/2 Database Manager client and server workstation must be assigned a name by the installer. Because this name must be unique in the NETBIOS network, OS/2 Database Manager adds a prefix ($SQL for a server and #SQL for a requester) and a suffix (00) to the workstation name.

Workstation Name

When an OS/2 Database Manager client or server wants to start a NETBIOS session with an OS/2 Database Manager server, the requester must provide the name of the server. This name is broadcast on the NETBIOS network. Eventually the server, which is always listening, acknowledges the request and tells the requester which token-ring address to use. If the server is not up for any reason, the local NETBIOS driver code returns a NETBIOS error code of 14 (unknown name) to the requester.


3.2.4 Cross-Reference

Table 3 and Table 4, respectively, show the host terminology for the key SNA and DRDA connectivity parameters. Where possible, the real parameter names are used. Because in OS/2 there are no commands to define resources, we indicate the name of the parameter used in the OS/2 windows. You should use the terminology in Table 3 and Table 4 when you contact the person responsible for a given platform.

Table 3. SNA Key Connectivity Parameters Cross-Reference

  Parameter                   MVS Host       VM Host        AS/400 Host   OS/2 Host
  Token-ring address          LOCADD (NCP)   LOCADD (NCP)   ADPTADR       Network adapter address
  IDBLK                       IDBLK          IDBLK          EXCHID        Node ID (hex)
  IDNUM                       IDNUM          IDNUM          EXCHID        Node ID (hex)
  MAXDATA                     MAXDATA        MAXDATA        MAXDATA       Transmit buffer size
  Partner token-ring address  DIALNO         DIALNO         ADPTADR       LAN destination address (hex)
  Network name                NETID          NETID          LCLNETID      Network ID
  Control point name          SSCPNAME       SSCPNAME       LCLCPNAME     Local node name
  Logical unit name           APPL           APPL           LCLLOCNAME    Local node name
  Partner logical unit name   APPL           APPL           RMTLOCNAME    Partner logical unit
  Mode name                   MODEENT        MODEENT        MODE          Transmission service mode
  RU size                     RUSIZES        RUSIZES        MAXLENRU      Maximum RU size
  Pacing                      PACING         PACING         INPACING      Receive pacing window

Table 4. DRDA Key Connectivity Parameters Cross-Reference

  Parameter                   MVS Host        VM Host   AS/400 Host   OS/2 Host
  RDB name                    LOCATION NAME   DBNAME    RDB           See note
  Partner RDB name            LOCATION NAME   DBNAME    RDB           See note
  Transaction program prefix                                          See note
  Transaction program name    LINKATTR        RESID     TNSPGM        See note

Note: Because OS/2 Database Manager cannot act as a DRDA server, the DRDA parameters are not applicable. For the DataHub environment, however, a name must be associated with each OS/2 Database Manager database to be managed from a DataHub workstation.


3.3 The ITSC Network

The ITSC DRDA network uses token-ring technology to interconnect different systems (see Figure 14). It is important to note that both the MVS and VM hosts reside on the same physical machines, although in the logical view of our network they seem far away from each other (New York and Toronto). From a connectivity point of view, the lack of proximity does not make any difference because both hosts are accessible through one token-ring connection on the 3745 VTAM NCP. From the DataHub/2 workstation, requests can be routed to the Toronto system by another path.

Figure 14. ITSC Network Configuration

Table 5 and Table 6 on page 39 show the values for the key SNA and DRDA connectivity parameters for each non-OS/2 host on the ITSC network. We need to know these values to interconnect all the different hosts.


Table 5. The ITSC Network: SNA Key Connectivity Parameter Values, Non-OS/2 Hosts

  Parameter            New York           Toronto             San Jose
                       MVS Host (DB2)     VM Host (SQL/DS)    AS/400 Host
                       Headquarters       Large Car Dealer    Large Car Dealer
  Token-ring address   400008210200       400008210200        400052047158
  IDBLK                Not applicable     Not applicable      056
  IDNUM                Not applicable     Not applicable      A2054
  Network name         USIBMSC            USIBMSC             USIBMSC
  Control point name   SCG20              SC19M               SJAS400A
  Logical unit name    LUDB23             LUSQLVMA            SJA2054I

Table 6. The ITSC Network: DRDA Key Connectivity Parameter Values, Non-OS/2 Hosts

  Parameter                   New York           Toronto             San Jose
                              MVS Host (DB2)     VM Host (SQL/DS)    AS/400 Host
                              Headquarters       Large Car Dealer    Large Car Dealer
  RDB name                    DB2CENTDIST        SQLLDLR01           SJ400DLR1
  Transaction program prefix  X'07'              X'07'               X'07'
  Transaction program name    6DB                SQLVMA              6DB

Table 7 and Table 8 show the values for the key SNA and RDS connectivity parameters for the OS/2 systems.

Table 7. The ITSC Network: SNA Key Connectivity Parameter Values, OS/2 Hosts

  Parameter            DataHub/2           Montreal       Paris          Rio de Janeiro
                       Workstation         OS/2 Host      OS/2 Host      OS/2 Host
                       OS/2 Host (DB2/2)   (OS/2 DBM)     (OS/2 DBM)     (DB2/2)
                                           Car Dealer     Car Dealer     Car Dealer
  Token-ring address   400052047135        400052047122   400052047157   400052047116
  IDBLK                05D                 05D            05D            05D
  IDNUM                A2039               A2018          A2050          A2019
  Network name         USIBMSC             USIBMSC        USIBMSC        USIBMSC
  Control point name   SJA2039I            SJA2018I       SJA2050I       SJA2019I
  Logical unit name    SJA2039I            SJA2018I       SJA2050I       SJA2019I

Table 8. The ITSC Network: RDS Key Connectivity Parameter Values, OS/2 Hosts

  Parameter          DataHub/2           Montreal       Paris          Rio de Janeiro
                     Workstation         OS/2 Host      OS/2 Host      OS/2 Host
                     OS/2 Host (DB2/2)   (OS/2 DBM)     (OS/2 DBM)     (DB2/2)
                                         Car Dealer     Car Dealer     Car Dealer
  Workstation name   DHSERVER            MTLDBM1        PARDBM1        RIODBM1

Now that we know all the SNA, DRDA, and RDS key connectivity parameters, let's start customizing the network.


Note: In our network, we customized our environment to connect any-to-any, which may not be realistic. In reality, you would probably interconnect one central repository to all the other platforms. Therefore, one platform would need to know only one partner, and not five as in our implementation.

3.4 MVS Definitions

In this section we show how the key connectivity parameters have been defined in the MVS environment for DRDA connectivity. We also explain the relationship between the different components.

3.4.1 SNA Definitions

This section covers the key SNA definitions you will have to implement in MVS/VTAM and NCP. It is not within the scope of this book to explain how to implement a VTAM network. You should work in collaboration with your network specialist to understand the existing VTAM environment in your enterprise. You need to discuss with your network specialist the modifications and extensions DataHub will require in the VTAM environment. The examples we provide in this section are extracts of the ITSC network definitions. The ITSC network we refer to in this section is a single network named USIBMSC. We did not implement interconnected networks. If you have to implement interconnected networks, the definitions will be slightly more complex and will require more planning. Refer to the VTAM Network Implementation Guide and VTAM Resource Definition Reference for more information on network implementations.

Cross-Domain Definitions

A multiple-domain network (like USIBMSC) contains more than one SSCP. Control of the resources in the network is divided among the SSCPs. Defining the VTAM network involves identifying the domains and defining the resources in those domains to the SSCPs.

A cross-domain resource manager (CDRM) is that part of an SSCP that supports cross-domain session setup and takedown. Before LUs in one domain can have cross-domain sessions with LUs in another domain, an SSCP-SSCP session must be established between the SSCPs of the two domains. Therefore, VTAM must know about all CDRMs with which it will communicate. You must define to VTAM its own CDRM (the host CDRM) and all other CDRMs (external CDRMs) in the network. Thus, to have a session between two SSCPs, you must have two CDRMs defined to each VTAM: one for the host and one for the external CDRM. Figure 15 on page 41 is an example of the CDRM definitions we implemented in our network.


******************************************************************
* CROSS DOMAIN RESOURCE MANAGER FOR WTSCMXA                      *
******************************************************************
         VBUILD TYPE=CDRM
USIBMSC  NETWORK NETID=USIBMSC
*
SC02M    CDRM  SUBAREA=02,               MVS FLOOR SYSTEM (WTSCMXA)
               CDRDYN=YES,CDRSC=OPT,ISTATUS=ACTIVE
SC19M    CDRM  SUBAREA=19,               WTSCVMXA
               CDRDYN=YES,CDRSC=OPT,ISTATUS=ACTIVE
SCG20    CDRM  SUBAREA=20,               WTSCPOK
               CDRDYN=YES,CDRSC=OPT,ISTATUS=ACTIVE
SC25M    CDRM  SUBAREA=25,               WTSC3090
               CDRDYN=YES,CDRSC=OPT,ISTATUS=ACTIVE
SC26M    CDRM  SUBAREA=26,               SJMVSESA
               CDRDYN=YES,CDRSC=OPT,ISTATUS=ACTIVE

Figure 15. Cross-Domain Resource Manager Definitions in MVS/VTAM

Resources controlled by VTAM in another domain are called cross-domain resources (CDRSCs). You do not have to define CDRSCs; VTAM can dynamically create the definition statements to represent resources that reside in other domains. Figure 16 is an extract of the CDRSC definitions we made in our network. Note that not all external resources have been defined; the rest will be created dynamically by VTAM.

******************************************************************
* CROSS DOMAIN RESOURCES                                         *
******************************************************************
         VBUILD TYPE=CDRSC
         NETWORK NETID=USIBMSC
******************************************************************
* CROSS DOMAIN RESOURCES ON WTSCVMXA                             *
******************************************************************
*
LUSQLDB2 CDRSC CDRM=SC19M,               SQL ON WTSCVMXA
               ISTATUS=ACTIVE

Figure 16. Cross-Domain Resource Definitions in MVS/VTAM

Token-Ring Address

Each participant in a token-ring network must have an address. The address is used to identify the element in the network. For example, to get into an SNA network you always leave the token-ring network through a gate, which is the NCP. Because the NCP is part of the token-ring, it must have an address associated with it.

We are using the ITSC network as an example to elucidate addressing. In our environment we could have left the token-ring through an NCP in the IBM 3745 Communication Controller to reach the SNA network and consequently access mainframe applications.

Support for the token-ring network in an NCP is provided by the NCP Token-Ring Interface (NTRI), which is a standard function of the NCP.


The 3745 attaches to the token-ring network through the Token-Ring Network Attachment Subsystem (TRSS). The TRSS consists of Token-Ring Adapters (TRAs). A TRA consists of a Token-Ring Multiplexor (TRM) and Token-Ring Interface Couplers (TICs).

Figure 17 shows the definition that was made in the NCP for TIC #1. The TIC address assigned is 400008210200. There are two ways of assigning a token-ring address: universally or locally administered. A universally administered address (UAA) is engraved on the machine by the manufacturer. A locally administered address (LAA) is assigned by a network administrator and should be unique. Usually the LAA is the more meaningful way of assigning an address because it can be coded to represent a building, floor, room, and workstation, so you can identify the resource by looking at the code. In our case, our local administrator told us to use address 400008210200 to identify the NCP.

***********************************************************************
* PHYSICAL GROUP FOR NTRI TIC #1 (PORT ADDRESS 1088)                  *
***********************************************************************
GT21P01  GROUP ECLTYPE=(PHY,PERIPHERAL), PHYSICAL GROUP               *
               NPACOLL=NO,                                            *
               LNCTL=SDLC
LT21TIC1 LINE  ADDRESS=(1088,FULL),      TIC ADDRESS                  *
               LOCADD=400008210200,      ADDRESS OF 1ST TIC           *
               MAXTSL=2044,              TRANSMIT FRAME SIZE          *
               ADAPTER=TIC2,             ADAPTER TYPE 2 16/4          *
               TRSPEED=16,               SPEED                        *
               PORTADD=1,                PHYSICAL PORT ADDRESS OF TIC *
               RCVBUFC=4095              MAX RECEIVE BUFFER SIZE
*
PT21TIC1 PU    ISTATUS=ACTIVE
*
SC21TIC1 LU    ISTATUS=INACTIVE
***********************************************************************

Figure 17. LAN Address for the Network Control Program

Once the NCP (3745 Communication Controller) has the LOCADD 400008210200 assigned to it, every time an RU or message addressed to the NCP passes through it in the token-ring, the NCP takes it off the token-ring and moves it into memory. The NCP does not pass the message or RU along to an adjacent element in the token-ring; it looks at the TH to find the destination address of the message or RU and reroutes it to the requested address. The destination address could be, for example, an SNA address outside the ring.

MAXDATA and MAXBFRU

The MAXDATA parameter included at NCP generation specifies the maximum PIU size that VTAM can deliver to the NCP (see Figure 18 on page 43). It indirectly limits the message size, or RU, that can be transmitted across the network.

In our case the MAXDATA specified is 5000, which means that VTAM can deliver data or a message of up to 4971 bytes (5000 - (26 (TH) + 3 (RH))) to the NCP.


***********************************************************************
* PCCU SPECIFICATIONS - ACF/VTAM OS/MVS(VM) 3745                      *
***********************************************************************
SC1SCGP PCCU  AUTODMP=NO,     AUTOMATICALLY TAKE DUMP                 X
              AUTOIPL=NO,     LOAD NCP AFTER FAILURE OR DUMP          X
              AUTOSYN=YES,    USE THE ALREADY LOADED NCP IF OK        X
              BACKUP=YES,     RESOURCE TAKEOVER PERMITTED             X
              CUADDR=922,     CHANNEL-ATTACHMENT ADDRESS              X
              DUMPDS=NCPDUMP, DUMP DATA SET                           X
              DUMPSTA=922-S,  DUMP STATION                            X
              LOADSTA=922-S,  LOAD STATION                            X
              MAXDATA=5000,   MAXIMUM PIU SIZE                        X
              NETID=USIBMSC,  NETWORK IDENTIFIER                      X
              SUBAREA=20,     HOST SUBAREA ADDRESS                    X
              VFYLM=YES       VERIFY NCP LOAD MODULE NAME

Figure 18. MAXDATA Definition for the Network Control Program

The NCP also delivers data to VTAM. The data size that VTAM can receive is specified indirectly through the definition of its buffer characteristics. The MAXBFRU parameter indicates the number of buffers that VTAM reserves to receive data, and the UNITSZ parameter specifies the VTAM buffer size. In Figure 19, MAXBFRU=32 and UNITSZ=384, which means that a message of up to 12286 bytes could be delivered to this VTAM.

***********************************************************************
* HOST DEFINITIONS (3745-V5R3)                                        *
***********************************************************************
SC1SCGH HOST  INBFRS=10,      NCP BUFFERS FOR EACH DATA TRANSFER      *
              MAXBFRU=32,     # VTAM BUFS FOR RCV DATA                *
              UNITSZ=384,     VTAM IO BUFFER SIZE (IOBUF)             *
              BFRPAD=0,       VTAM REQUIRED                           *
              NETID=USIBMSC,  NETWORK WHERE THIS HOST IS              *
              SUBAREA=20      SSCP SUBAREA NUMBER (HOST)

Figure 19. MAXBFRU and UNITSZ Definitions for the Network Control Program

Note that the VTAM and NCP definitions are used together. Some VTAM definitions are built at NCP generation, and some NCP definitions are built at VTAM generation. This mechanism allows activation of definitions without regenerating NCP or VTAM.

The NCP is loaded in the Communication Controller. VTAM is started through a startup procedure that specifies which libraries (VTAMLIB, VTAMLST, VTAMOBJ, NCPLOAD, and NCPDUMP) VTAM will use. VTAM startup parameters are specified in a library member called ATCSTRxx, where xx is the suffix specified in the LIST parameter of the START command (see Figure 20).

S NET,,,(LIST=00)

Figure 20. VTAM Startup Command

The startup command in Figure 20 activates the parameters in the ATCSTR00 member (see Figure 21 on page 44).


*********************************************************************
* VTAM START LIST - ATCSTR00                                        *
*********************************************************************
NOPROMPT,                                                            *
SSCPID=5411,                                                         *
SUPP=NOSUP,                                                          *
HOSTSA=20,                                                           *
NETID=USIBMSC,                                                       *
SSCPNAME=SCG20,                                                      *
HOSTPU=SCG20PU,                                                      *
CONFIG=00,                                                           *
MAXSUBA=255,                                                         *
NOTRACE,TYPE=VTAM,                                                   *
IOBUF=(650,384,12,,32,32),                                           *
BSBUF=(975,,,,,27),                                                  *
APBUF=(16,,2,,1,3),                                                  *
LPBUF=(100,,0,,60,60),                                               *
LFBUF=(100,,0,,24,10),                                               *
WPBUF=(350,,0,,25,10),                                               *
SFBUF=(60,,0,,32,10),                                                *
SPBUF=(60,,0,,32,10),                                                *
CRPLBUF=(160,,13,,80,80),                                            *
IOINT=180

Figure 21. VTAM Startup Parameter List ATCSTR00 (SSCPNAME=SCG20)

Partner Token-Ring Address

In the previous discussion we addressed the case where an element in the token-ring network wanted to send a message or RU out of the network to reach a mainframe application. Now let's look at the opposite case, that is, when an SNA application wants to establish a session with an element in the token-ring network.

If an application wants to access a partner on the token-ring network, for example, an AS/400, VTAM needs to know this partner both by its name and by its token-ring address. VTAM can be informed of the partner token-ring address by means of the DIALNO parameter in the PATH macro.

Figure 39 on page 63 shows the definition we used in our environment to inform VTAM of the required LU name and the partner token-ring address: DIALNO=400052047158 and the LU name SJA2054I were specified. Now let's see what was defined on the AS/400 side. First, as can be seen in Figure 38 on page 61, the DIALNO value specified in the PATH macro must match the ADPTADR value specified in the AS/400 line description. (Refer to Figure 41 on page 64 for the AS/400 token-ring line definition. Also, refer to Figure 42 on page 65 to see how we informed the AS/400 of the NCP address in case an AS/400 application wanted to leave the token-ring through the NCP gate.)


Network Name

When defining an element in a network, it is important to specify to which network the element belongs. You might have different networks communicating, and a network ID is required to identify the correct network. The network name is specified in the NETID parameter to VTAM (see Figure 21 on page 44); the parameter tells VTAM to which network it belongs. The NETID parameter is also used in the NCP definitions, where it tells the NCP to which network it belongs. In our case, the network name is USIBMSC (see Figure 18 on page 43 and Figure 19 on page 43).

Control Point Name

This is the SSCP name, which is actually the ACF/VTAM name. In our network the control point name has been defined through the SSCPNAME parameter. Refer to Figure 21 on page 44, where you can see the SSCPNAME specified as SCG20. You can also see that this SSCP (VTAM) belongs to subarea 20. This is key information that identifies SSCP SCG20. The subarea parameter is used to tie this SSCP to other definitions. For example, the PCCU definitions in Figure 18 on page 43 are tied to SSCP SCG20 because they refer to subarea 20.

Logical Unit Name

As stated previously, the LU name can be an application program. In the MVS environment, DB2 implements the DRDA architecture as part of the Distributed Data Facility (DDF). DDF uses VTAM to perform distributed database communications on behalf of DB2. So, to enable DB2 access throughout the network, we need to assign it an LU name as well as a network ID. Refer to Figure 22 on page 46 for the LUDB23 DB2 definition. Note that when the ACBNAME parameter is not coded, the network-unique name (the name of the APPL statement) is used as the ACBNAME. The ACBNAME parameter is needed when application programs with the same APPLID are activated in multiple hosts and are to enter into a cross-domain session. The ACBNAME parameter is also required when terminal users in the network specify with which application program in which host they want to establish a session.

The parameter DLOGMOD=IBMRDB specifies the log mode entry to select for use from the log mode table AGWTAB. The parameter SECACPT=ALREADYV specifies that the level of conversation security is set to “already verified,” which is the recommended value.


         VBUILD TYPE=APPL
*---------------------------------------------------------------------*
* POUGHKEEPSIE DB2 SUBSYSTEM VTAM DEFINITION                          *
*---------------------------------------------------------------------*
*
LUDB23  APPL  ACBNAME=LUDB23,  APPCMD CAPABILITY ENABLED              X
              APPC=YES,                                               X
              AUTH=(ACQ),      ACCESS TO VTAM FUNCTIONS               X
              AUTOSES=10,                                             X
              DLOGMOD=IBMRDB,                                         X
              DMINWNL=25,                                             X
              DMINWNR=25,                                             X
              DSESLIM=50,      MAX NUMBER OF SESSIONS IN USE          X
              EAS=509,                                                X
              ENCR=NONE,                                              X
              MODETAB=AGWTAB,                                         X
              PARSESS=YES,                                            X
              SECACPT=ALREADYV,                                       X
              SONSCIP=NO,                                             X
              SRBEXIT=YES,     SRB PROCESSING IN EXIT ROUTINES        X
              VERIFY=NONE,                                            X
              VPACING=2,                                              X
              VTAMFRR=NO

Figure 22. LU Definition for DB2 in the MVS VTAM Environment

Partner Logical Unit Name

The partner LU is simply another LU in the network. It is defined in the same way the LU name is defined. In our network we defined several LUs that were used as partner LUs when establishing LU-to-LU sessions. Refer to Figure 30 on page 52 for the AVS LU definition for the SQL/DS environment. The AS/400 LU name definition is shown in Figure 39 on page 63.

Mode Name

The session characteristics are defined through a logon mode, or LOGMODE. One of the LOGMODEs we used is IBMRDB (see Figure 23).

         TITLE 'DB2IBM'
IBMRDB   MODEENT LOGMODE=IBMRDB,                                      *
               COS=BACKHI,                                            *
               PSNDPAC=X'08',   PRIMARY SEND PACING COUNT             *
               SSNDPAC=X'08',   SECONDARY SEND PACING COUNT           *
               SRCVPAC=X'08',   SECONDARY RECEIVE PACING COUNT        *
               RUSIZES=X'8989', RU SIZE 4096 IN 4096 OUT              *
               FMPROF=X'13',                                          *
               TSPROF=X'07',                                          *
               PRIPROT=X'B0',                                         *
               SECPROT=X'B0',                                         *
               COMPROT=X'D0B1',                                       *
               ENCR=B'0000',                                          *
               PSERVIC=X'060200000000000000000300'

Figure 23. IBMRDB LOGMODE


RU Size: The RU size defines the maximum amount of data that can be sent to the session partner in each send operation. As stated before, the RU size is one of the elements that determines the PIU size. You can see in Figure 23 that in our environment the RU size was specified as X'8989' (that is, 8 x 2**9 = 8 x 512 = 4K in each direction on the session), which is the recommended value in a DRDA environment.

Pacing: Pacing is an important concept when considering the performance of the network. The concept of pacing is that one LU sends a specified number of RUs and then waits for a “pacing response” from the receiving LU before it sends any more data (RUs). For example, with a pacing count of 8, as specified by PSNDPAC=X'08' in Figure 23, an LU can send eight RUs and must then wait for a pacing response before sending more.

Session pacing can be established by means of the LOGMODE definitions used in the bind when the session is set up, or specified on the APPL definition in VTAM. Session pacing applies in each direction on the session independently; that is, each LU has a send count and a receive count. Refer to Figure 22 on page 46 for details of the session pacing definition used in the LUDB23 APPL definition in VTAM. VPACING=2, as specified in our environment, is the recommended value. Remember that a session is started by one LU 6.2 application sending a BIND command. The sender of the BIND is called the primary LU for that session; the receiver of the BIND is called the secondary LU. The BIND sent by the primary LU contains certain session parameters (for example, RU sizes, protocols, and pacing), and the secondary LU can suggest another set of session parameters. In the LU 6.2 architecture, this is called a negotiable BIND. The primary LU can either accept the session parameters suggested by the secondary LU or terminate the session.

3.4.2 DRDA Definitions

In this section we show how the DRDA key connectivity parameters have been defined.

RDB Name

The RDB name is the globally unique name for a relational database. The terminology used in DB2 for the RDB name is LOCATION NAME. In a DB2 environment, the location name and password of your local DB2 subsystem are defined in the bootstrap data set (BSDS) by using either the installation panels or the Change Log Inventory utility. The LOCATION NAME, the password that is used to identify the local DB2, and its associated LU name are kept in the BSDS. Refer to Figure 24 on page 48 for a schematic view of the parameters involved when connecting DB2 to VTAM.

Figure 24. DB2 to VTAM Connection

The LOCATION NAME is equivalent to the RDB_NAME in non-DB2 SAA database managers and is used to identify DB2 to the system. The LOCATION NAME is referred to by other subsystems that need to access this particular DB2. This mechanism allows a database manager to connect to a DB2 subsystem without referring to its NETID.LUNAME. The LOCATION NAME must be unique in the network and can be up to 16 bytes long.

The LUNAME is the name by which VTAM recognizes the local DB2 subsystem as a VTAM application. It is used to define DDF in the VTAM APPL statement. You can also define the DB2 password in the APPL PRTCT statement; this password is optional (refer to Figure 24). When DDF is started, it retrieves the password from the BSDS and forwards it to VTAM for confirmation. This process validates the connection and prevents unauthorized LU names from getting into the network.

The LUNAME must be unique within the network and can be up to 8 bytes long. The LUNAME is also kept in the BSDS.

We defined the LUNAME for DB2 location DB2CENTDIST as LUDB23 (see Figure 22 on page 46).
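As an illustration, the job below is a minimal sketch of defining the location and LU name in the BSDS with the Change Log Inventory utility (DSNJU003). The data set names and the password are our placeholders, and the BSDS must not be in use by DB2 when the utility runs.

  //DEFDDF   EXEC PGM=DSNJU003
  //STEPLIB  DD  DISP=SHR,DSN=DB2.SDSNLOAD
  //SYSUT1   DD  DISP=OLD,DSN=DB2.BSDS01
  //SYSUT2   DD  DISP=OLD,DSN=DB2.BSDS02
  //SYSPRINT DD  SYSOUT=*
  //SYSIN    DD  *
    DDF LOCATION=DB2CENTDIST,LUNAME=LUDB23,PASSWORD=DB2PASS
  /*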

DRDA Partners

Once the local DB2 names have been defined, you need to define the DRDA remote server and remote requester subsystems by populating the communications database (CDB) tables. The CDB tables describe the connections between the local DB2 and the other subsystems. The following five tables are part of the CDB:

SYSIBM.SYSLOCATIONS
This table maps the LOCATION name to the LUNAME and TPN. It must contain at least one row for each remote server. Requesters are not required to be defined.

SYSIBM.SYSLUNAMES
This table defines the security and mode requirements for conversations. It has a row for each LUNAME associated with a remote system you want to access.

SYSIBM.SYSUSERNAMES
This table is used for inbound and outbound translations.

SYSIBM.SYSMODESELECT
This table maps applications to specific VTAM logon modes.

SYSIBM.SYSLUMODES
This table defines the LU 6.2 session limits.

To define DB2 as a requester, that is, to have DB2 access remote servers, you must include at least two rows for each remote database: one row in the SYSIBM.SYSLUNAMES table and one row in the SYSIBM.SYSLOCATIONS table. Other rows can be added to exploit the full CDB capability.

To define DB2 as a server, that is, to allow other subsystems to access DB2, you can use the defaults. If you want to:

• Use inbound translation
• Associate a specific logon mode with a specific application
• Specify any other special property for a requesting LUNAME,

you should populate the CDB according to your requirements.
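For illustration, the statements below sketch the minimum requester definitions for the SQL/DS partner, mirroring the rows shown in Figure 25 and Figure 26. Verify the column list against the CDB definitions of your DB2 release before using them.

  INSERT INTO SYSIBM.SYSLOCATIONS
         (LOCATION, LOCTYPE, LINKNAME, LINKATTR)
  VALUES ('SQLLDLR01', ' ', 'LUSQLDB2', 'SQLVMA');

  INSERT INTO SYSIBM.SYSLUNAMES
         (LUNAME, SYSMODENAME, USERSECURITY, ENCRYPTPSWDS,
          MODESELECT, USERNAMES)
  VALUES ('LUSQLDB2', ' ', 'A', 'N', 'Y', 'I');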

For our network we defined in the CDB at location name DB2CENTDIST the remote servers and requesters as shown in Figure 25 through Figure 29 on page 50.

---------+---------+---------+---------+---------+---------+
LOCATION         LOCTYPE   LINKNAME   LINKATTR
---------+---------+---------+---------+---------+---------+
DB2CENTDIST                LUDB23
DB2REGNDIST01              LUDB2B
SQLLDLR01                  LUSQLDB2   SQLVMA
SJ400DLR1                  SJA2054I
---------+---------+---------+---------+---------+---------+

Figure 25. SYSIBM.SYSLOCATIONS CDB Table. Two DB2 location names, DB2CENTDIST and DB2REGNDIST01, are defined; their respective LUNAMEs are LUDB23 and LUDB2B. The SQL/DS location SQLLDLR01 has LUSQLDB2 as the LUNAME and SQLVMA as the TPN. The AS/400 location SJ400DLR1 has SJA2054I as its LUNAME.


---------+---------+---------+---------+---------+---------+---------+--
LUNAME     SYSMODENAME   USERSECURITY   ENCRYPTPSWDS   MODESELECT   USERNAMES
---------+---------+---------+---------+---------+---------+---------+--
LUDB23                   A              N              Y            I
LUDB2B                   A              N              Y            I
LUSQLDB2                 A              N              Y            I
SJA2054I                 A              N              N            I
                         C              N              N
---------+---------+---------+---------+---------+---------+---------+--

Figure 26. SYSIBM.SYSLUNAMES CDB Table. This table defines the LUNAMEs and indicates that inbound translation for LUNAMEs LUDB23, LUDB2B, LUSQLDB2, and SJA2054I is required. All other LUNAMEs will be RACF checked.

---------+---------+---------+---------+---------
TYPE   AUTHID   LUNAME     NEWAUTHID   PASSWORD
---------+---------+---------+---------+---------
I      CHAPUT   LUDB2B     CHAPUT
I      STDB2A   LUDB2B     STDB2A
I      STDB2B   LUDB2B     STDB2B
I      STDB2C   LUDB2B     STDB2C
I      STDB2D   LUDB2B     STDB2D
I      STDB2E   LUDB2B     STDB2E
I      CHAPUT   LUSQLDB2   CHAPUT
I      STSQLA   LUSQLDB2   STDB2A
I      STSQLB   LUSQLDB2   STDB2B
I      STSQLC   LUSQLDB2   STDB2C
I      STSQLD   LUSQLDB2   STDB2D
I      STSQLE   LUSQLDB2   STDB2E
I      DHJOSE   SJA2054I   STDB2E
---------+---------+---------+---------+---------

Figure 27. SYSIBM.SYSUSERNAMES CDB Table. These are the userid translations we specified.

---------+---------+---------+---------+
AUTHID   PLANNAME   LUNAME     MODENAME
---------+---------+---------+---------+
                    LUDB23     IBMRDB
                    LUDB2B     IBMRDB
                    LUSQLDB2   IBMRDB
---------+---------+---------+---------+

Figure 28. SYSIBM.SYSMODESELECT CDB Table. These are the modes we specified.

---------+---------+---------+---------+
LUNAME   MODENAME   CONVLIMIT   AUTO
---------+---------+---------+---------+
---------+---------+---------+---------+

Figure 29. SYSIBM.SYSLUMODES CDB Table. No definitions are included in this table. We used the defaults and did not want to change MODENAME, CONVLIMIT, and AUTO for any specific LUNAME.


3.4.3 Testing DRDA Connections

Application programs can be executed after they are bound to a target database. Before you install the DataHub definitions, we recommend that you verify all DRDA connections. Once you have verified the DRDA connections you can, for example, bind SPUFI, an interactive tool provided in the DB2 environment, at a remote location. Because SPUFI can be bound to SQL/DS and AS/400 databases, you can query SQL/DS and AS/400 data from DB2.

DB2 SPUFI to SQL/DS

To make SPUFI available to the SQL/DS platform, you must bind the package DSNESM68 for the user IDs DSNESPCS and DSNESPRR at the SQL/DS site. You can do this from DB2 by using the BIND PACKAGE option in DB2I. Before doing the bind, you should check whether both plans include the remote SQL/DS location name. If they do not include it, you should rebind the plans to include the location name. Then you can use SPUFI to grant execute authority on these plans to PUBLIC at the remote location.

DB2 SPUFI to AS/400

To make SPUFI available to the AS/400 platform, you must bind the package DSNESM68 in collections DSNESPCS and DSNESPRR at the AS/400 site. You can do this from DB2 using the BIND PACKAGE option in DB2I. Before doing the bind, you should check whether both plans include the remote AS/400 location name. If they do not include it, you should rebind the plans to include the location name. Then you can use SPUFI to grant execute authority on these plans to PUBLIC at the remote location.
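Once the packages are bound at the remote site, the grant could be issued from SPUFI with statements along the following lines. This is a sketch only: DSNESPCS.DSNESM68 and DSNESPRR.DSNESM68 are the qualified package names from the text, the exact GRANT syntax accepted varies by server, and your site may prefer granting to specific authorization IDs rather than PUBLIC.

   -- Issued through SPUFI while connected to the remote location
   GRANT EXECUTE ON PACKAGE DSNESPCS.DSNESM68 TO PUBLIC;
   GRANT EXECUTE ON PACKAGE DSNESPRR.DSNESM68 TO PUBLIC;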

3.5 VM Definitions

In this section we show how the key connectivity parameters have been defined in the VM environment. We also explain the relationship between the different components.

3.5.1 SNA Definitions

Because we use the same Communication Controller for VM and MVS in the ITSC network, the SNA definitions used in the VM environment are the same as those in the MVS environment. Refer to 3.4.1, "SNA Definitions" on page 40.

Logical Unit Name

In MVS we define DB2 as a VTAM application in the VTAM APPL statement. In VM we define the AVS application, which is related to SQL/DS (see Figure 30 on page 52).


**********************************************************************
* AVS VTAM DEFINITION FOR CONNECTING SQL/DS                          *
**********************************************************************
AVSVM    VBUILD TYPE=APPL
*
LUSQLVMA APPL APPC=YES,                                              X
              AUTHEXIT=YES,                                          X
              AUTOSES=10,                                            X
              DLOGMOD=IBMRDB,                                        X
              DSESLIM=100,                                           X
              DMINWNL=50,                                            X
              DMINWNR=50,                                            X
              MODETAB=AGWTAB,                                        X
              PARSESS=YES,                                           X
              SECACPT=ALREADYV,                                      X
              SYNCLVL=CONFIRM,                                       X
              VERIFY=NONE,                                           X
              VPACING=2
**********************************************************************

Figure 30. AVS LU Definition for SQL/DS Environment

The parameter DLOGMOD=IBMRDB specifies the log mode entry to use from the log mode table AGWTAB. The parameter SECACPT=ALREADYV specifies the level of conversation security, which is set up as "already verified," the recommended value.

3.5.2 DRDA Definitions

In this section we describe the key DRDA connectivity parameters in the SQL/DS environment.

DRDA Components in SQL/DS: An Overview

Figure 31 on page 53 shows the main DRDA components in the SQL/DS environment.

Figure 31. Main DRDA Components in an SQL/DS Environment

APPC/VM      Advanced Program-to-Program Communication/VM. An API for communication between two virtual machines that is mappable to the SNA LU 6.2 APPC interface.

APPC/VTAM    Advanced Program-to-Program Communication/VTAM. An API providing an LU 6.2 function set, as defined by SNA, for communication between application programs.

AVS          APPC/VM VTAM Support. A component of VM that enables application programs using APPC/VM to communicate with programs anywhere in an SNA network. AVS transforms APPC/VM into APPC/VTAM on the outbound flow, and APPC/VTAM into APPC/VM on the inbound flow.

AVS Console  Used to activate and deactivate gateways, monitor DataHub Support/VM gateways and conversations, and control GCS tracing.

CP           Control Program. A component of VM that manages the resources of a single computer so that multiple computing systems (virtual machines) appear to exist.

GCS          Group Control System. A component of VM that provides simulated MVS services and unique supervisor services to help support a native SNA network. VTAM and AVS run under GCS, under the control of CP.

SQL/DS Application Requester (AR)
             A CMS machine that can connect to a DRDA application server to access a database.

SQL/DS Database Machine (AS)
             A CMS machine controlling access to an SQL/DS database, in which the DRDA application server code runs.

Application Server Identification

An SQL/DS application server can be identified in several ways:

Database Name  This is the RDB name in DRDA terminology. It is specified, for example, on the CONNECT TO database-name statement. The database name can be up to 18 characters long. Prior to SQL/DS V3.3, the database name could be only up to eight characters long.

Resource-id    This is the transaction program name in DRDA terminology. It is the name that VM uses to identify a database. This name can be up to eight characters long and is used as:

               • The CMS filename for database control files with a file type of SQLDBN, SQLFDEF, or SQLDBGEN. These database control files reside on the production CMS minidisk (Q-disk).

               • An externally identified name when TSAF and/or AVS are used to define the database as a GLOBAL RESOURCE.

               The resource-id defaults to the same value as the database name unless the database administrator explicitly requests two different names, or the database name is longer than eight characters. If the two names are different, the database name and the resource-id must be mapped together by means of the CMS RESID NAMES file.

               In Figure 32 the SQL/DS database name (DRDA RDB name) SQLLDLR01 is associated with the SQL/DS resource-id SQLVMA.

RESID NAMES Q1 V 255 Trunc=255 Size=2 Line=0 Col=1 Alt=0
====>
:nick. :resid.SQLVMA
       :dbname.SQLLDLR01

Figure 32. CMS RESID NAMES File

This association enables VM to translate a request that refers to the DRDA RDB name into a shorter resource-id that is used internally by SQL/DS for other purposes, for example, as the name of the SQLDBN CMS file (see Figure 33).

SQLVMA SQLDBN Q1 V 80 Trunc=80 Size=1 Line=0 Col=1 Alt=0
====>
DBMACHID=SQLMACH,DCSSID=SQLDBA,DBNAME=SQLLDLR01,AMODE=31

Figure 33. SQLVMA SQLDBN File


Machine-id     This is the CMS-id of the machine that is to be used as the server machine. As you can see in Figure 33, the resource-id is SQLVMA, the machine-id is SQLMACH, and the database name is SQLLDLR01.
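As a minimal illustration of how these three names cooperate, an application requester could connect with the statement below. The names are the ones from Figures 32 and 33, and the comments trace the resolution path rather than prescribe the syntax of any particular requester.

   -- The 18-character RDB name is resolved to the 8-character
   -- resource-id SQLVMA (Figure 32), which names the SQLDBN control
   -- file on database machine SQLMACH (Figure 33).
   CONNECT TO SQLLDLR01;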

Communication between AVS and SQL/DS Machines: An Overview

Database Machine Definition

Figure 34 highlights the main parameters. SQLMACH is the name of the SQL/DS database machine. In the statement IUCV *IDENT RESANY GLOBAL, RESANY means this machine can access any database resource, and GLOBAL means this database can be accessed from remote LUs. The statement IUCV ALLOW means that every CMS user can access this machine.

USER SQLMACH NOTUSED 8M 17M G
ACCOUNT ITS2000
OPTION MAXCONN 26
IUCV *IDENT RESANY GLOBAL
IUCV ALLOW
* VM/ESA Data spaces directory, next three statements.
MACHINE XC
XCONFIG ACCESSLIST ALSIZE 1022
XCONFIG ADDRSPACE MAXNUMBER 1022 TOTSIZE 2044G
OPTION MAINTCCW
IPL CMS
CONSOLE 009 3215 T OPERATOR
SPOOL 00C 3505 *
SPOOL 00D 3505 A
SPOOL 00E 1403
LINK MAINT 190 190 RR
LINK MAINT 19D 19D RR
LINK MAINT 19E 19E RR
LINK MAINT 19F 19F RR
* base machine disk
MDISK 0191 3390 2446 11 VMXP1P MR DBA2PW DBA3PW
* Service Minidisk
MDISK 0193 3390 2672 55 VMXP1P MR DBA2PW DBA2PW
* Production Minidisk
MDISK 0195 3390 2457 22 VMXP1P MR ALL DBA2PW DBA2PW
* Directory Disk
MDISK 0200 3390 1970 33 VMXP1P MR DBA2PW DBA2PW
* Log disk1
MDISK 0201 3390 2003 9 VMXP1P MR DBA2PW DBA2PW
* Data disk1
MDISK 0202 3390 2012 65 VMXP1P MR DBA2PW DBA2PW
* QMF.-------------------------------------------------
* Production disk
MDISK 0396 3390 2479 33 VMXP1P RR
* Distribution disk
MDISK 0397 3390 2512 19 VMXP1P RR
* DATA Hub.--------------------------------------------
MDISK 0197 3390 2578 15 VMXP1P RR
MDISK 0198 3390 2643 20 VMXP1P RR

Figure 34. Database Machine Definition


AVS Machine Definition

In Figure 35 on page 56, the GATEWAY parameter of the statement IUCV *IDENT GATEANY ... means that the AVS machine can use any gateway defined in the AGWPROF GCS file, as shown in Figure 36 on page 57.

USER AVSVM BMESAPW 12M 17M BG 64
ACCOUNT ITS3000
MACHINE XA
IUCV *IDENT GATEANY GATEWAY REVOKE
IUCV ALLOW
OPTION MAINTCCW COMSRV MAXCONN 20 VCUNOSHR REALTIMER ACCT BMX
IPL GCS PARM AUTOLOG
NAMESAVE GCS
CONSOLE 01F 3215 T VMAINT
SPOOL 00C 2540 READER *
SPOOL 00D 2540 PUNCH A
SPOOL 00E 1403 A
LINK MAINT 190 190 RR
LINK MAINT 19D 19D RR
LINK VMAINT 595 595 RR
LINK VMAINT 19E 19E RR
LINK SQLMACH 198 198 RR
MDISK 0191 3390 436 3 VMXP1P MR RAVSOBJ WAVSOBJ
**********************************************************************

Figure 35. AVS Machine Definition

RDB Name

The RDB name is the database name. In our example the RDB name is SQLLDLR01.

DRDA Partners

To connect VM to other DRDA platforms you need to provide definitions in:

• The AVS profile (AGWPROF GCS)
• The communication directory (UCOMDIR NAMES).

AVS Profile Definition

Gateways between VM and other platforms are defined in the AGWPROF GCS file, which is invoked in the AVS machine initialization process (see Figure 36 on page 57).

One global gateway should be defined for each AVS machine, for example:

• ′AGW ACTIVATE GATEWAY LUSQLDB2 GLOBAL′
• ′AGW CNOS LUSQLDB2 LUDB23 IBMRDB 10 5 5′.

The AVS profile is also used to specify inbound translation. When remote application requesters are trying to get information from a VM application server, you can translate the remote user ID into a valid local user ID. Use one entry for each remote user for which you want inbound translation, for example:

• ′AGW ADD USERID LUDB23 STDB2B STSQLB′

The STDB2B DB2 user on the LUDB23 gateway will be known as the STSQLB SQL/DS user.

* * * * * *
/* AGWPROF GCS which activates one gateway for global resource */
/* communications and sets values for the session. */
* * * * * *
/* DataHub Support ************************************************ */
′ FILEDEF VDHTOR01 DISK POOL CONFIG J′
′ AGW ACTIVATE GATEWAY VDHTOR01 PRIVATE MANAGER EMQVCMG′
′ AGW CNOS VDHTOR01 SJA2015I IBMRDB 10 5 5′
′ AGW CNOS VDHTOR01 SJA2018I IBMRDB 10 5 5′
′ AGW CNOS VDHTOR01 SJA2039I IBMRDB 10 5 5′
′ AGW CNOS VDHTOR01 SJA2019I IBMRDB 10 5 5′
′ AGW CNOS VDHTOR01 SJA2050I IBMRDB 10 5 5′
/* */
/* DRDA Gateways for AVS systems*********************************** */
′ AGW ACTIVATE GATEWAY LUSQLVMA GLOBAL′
′ AGW ACTIVATE GATEWAY LUSQLDB2 GLOBAL′
/* Connection between LUSQLVMA and PS/2 Workstations ************** */
′ AGW CNOS LUSQLVMA SJA2015I IBMRDB 10 5 5′
′ AGW CNOS LUSQLVMA SJA2018I IBMRDB 10 5 5′
′ AGW CNOS LUSQLVMA SJA2039I IBMRDB 10 5 5′
′ AGW CNOS LUSQLVMA SJA2019I IBMRDB 10 5 5′
′ AGW CNOS LUSQLVMA SJA2050I IBMRDB 10 5 5′
/* Connection between LUSQLDB2 and AS/400 (SJA2054I)*************** */
/* SJA2054I for other RDBMS: AS/400, LUSQLVMB,LUDB23,LUDB2B ******* */
′ AGW CNOS LUSQLDB2 SJA2054I IBMRDB 10 5 5′
/* Connection between LUSQLDB2 and a DB2 (LUDB23) ***************** */
′ AGW CNOS LUSQLDB2 LUDB23 IBMRDB 10 5 5′
/* Connection between LUSQLDB2 and a second DB2 (LUDB2B) ********** */
′ AGW CNOS LUSQLDB2 LUDB2B IBMRDB 10 5 5′
/* SQL/400 USERS ************************************************** */
′ AGW ADD USERID SJA2054I CHAPUT =′
′ AGW ADD USERID SJA2054I STSQLA =′
′ AGW ADD USERID SJA2054I STSQLB =′
′ AGW ADD USERID SJA2054I STSQLC =′
′ AGW ADD USERID SJA2054I STSQLD =′
′ AGW ADD USERID SJA2054I STSQLE =′
/* DB2B USERS */
′ AGW ADD USERID LUDB2B CHAPUT =′
′ AGW ADD USERID LUDB2B STDB2A STSQLA′
′ AGW ADD USERID LUDB2B STDB2B STSQLB′
′ AGW ADD USERID LUDB2B STDB2C STSQLC′
′ AGW ADD USERID LUDB2B STDB2D STSQLD′
′ AGW ADD USERID LUDB2B STDB2E STSQLE′
/* DB23 USERS **************************************************** */
′ AGW ADD USERID LUDB23 CHAPUT =′
′ AGW ADD USERID LUDB23 STDB2A STSQLA′
′ AGW ADD USERID LUDB23 STDB2B STSQLB′
′ AGW ADD USERID LUDB23 STDB2C STSQLC′
′ AGW ADD USERID LUDB23 STDB2D STSQLD′
′ AGW ADD USERID LUDB23 STDB2E STSQLE′

Figure 36. AGWPROF GCS File


Communication Directory Definition

To be able to connect to another RDBMS, a CMS application requester needs access to a system communication directory (usually called SCOMDIR NAMES) and/or a user communication directory (usually called UCOMDIR NAMES). Each entry in a communication directory defines a remote RDBMS.

For example, for the DB2 server named DB23 in Figure 37, we find:

• The transaction program name ″6DB (the default DB2 and AS/400 TPN). Note that the first character of the transaction program name (:tpn) is, in fact, X′07′.
• The LU names LUSQLDB2 (AVS LU name) and LUDB23 (DB2 LU name)
• The dbname DB2CENTDIST (RDB name).

* * * * * * * * * * * *
*UCOMDIR NAMES file on STSQLB requester Machine.
*** * * * * * * * * * *
/* Entry to define the path to access the DB2 server DB2CENTDIST */
:nick.DB23
     :tpn.″6DB
     :luname.LUSQLDB2 LUDB23
     :modename.IBMRDB
     :security.SAME
     :dbname.DB2CENTDIST
/* Entry to define the path to access the DB2 server DB2REGNDIST01 */
:nick.DB2B
     :tpn.″6DB
     :luname.LUSQLDB2 LUDB2B
     :modename.IBMRDB
     :security.SAME
     :dbname.DB2REGNDIST01
/* Entry to define the path to access the AS/400 server SJ400DLR1 */
:nick.AS400
     :tpn.″6DB
     :luname.LUSQLDB2 SJA2054I
     :modename.IBMRDB
     :security.SAME
     :dbname.SJ400DLR1
/* Entry to define the path to access the Local SQL/DS server SQLLDLR01*/
:nick.SQLLOCAL
     :tpn.SQLVMA
     :luname.*IDENT
     :dbname.SQLLDLR01

Figure 37. UCOMDIR NAMES File

Transaction Program Name

The TPN in SQL/DS is synonymous with the resource-id.


3.5.3 Binding Applications

Using the DBS Utility on a Non-SQL/DS Application Server

Before a user can use the DBS utility on a non-SQL/DS application server, you must first preprocess the DBS utility package on the non-SQL/DS application server, and then create the table SQLDBA.DBSOPTIONS on the non-SQL/DS application server. You must obtain the necessary program bind and table creation privileges for your authorization-id on the target application server. Do the following from an SQL/DS application requester to preprocess the DBS utility:

1. Run the SQLINIT EXEC to establish the non-SQL/DS application server as the default application server.

2. Link to the database machine′s service disk:

   LINK machid 193 193 RR

3. Enter the following to access the service disk:

   ACC 193 V

4. To preprocess the DBS utility, enter the following:

   SQLPREP ASM PP (PREP=SQLDBA.ARIDSQL,BLOCK,ISOL(CS),
     NOPR,NOPU,CTOKEN(NO),ERROR) IN(ARIDSQLP MACRO V)

5. To create the table SQLDBA.DBSOPTIONS, enter the following DBS utility commands:

   SET ERRORMODE CONTINUE;
   CREATE TABLE SQLDBA.DBSOPTIONS (SQLOPTION VARCHAR(18) NOT NULL,
     VALUE VARCHAR(18) NOT NULL);
   CREATE UNIQUE INDEX SQLDBA.DBSINDEX ON SQLDBA.DBSOPTIONS
     (SQLOPTION,VALUE);
   INSERT INTO SQLDBA.DBSOPTIONS VALUES (′RELEASE′,′3.3.0′);
   COMMIT WORK;

Using ISQL on a Non-SQL/DS Application Server

Before a user can make an ISQL request against a non-SQL/DS application server, you must load the ISQL package on the non-SQL/DS application server.

Note: The SQL/DS ISQL component cannot access AS/400 databases from SQL/DS; in this case use the DBS utility instead of ISQL.

Before you can load ISQL on a non-SQL/DS application server, you must first preprocess the DBS utility on the non-SQL/DS application server. Then ensure that your authorization-id has the necessary program bind and table creation privileges on the non-SQL/DS application server. Do the following from an SQL/DS application requester to load ISQL:

1. Run the SQLINIT EXEC to establish the non-SQL/DS application server as the default application server.

2. Link to the database machine′s service disk:

   LINK machid 193 193 RR


3. Enter the following to access the service disk:

   ACC 193 V

4. Enter the following CMS command:

   FILEDEF ARIISQLM DISK ARIISQLM MACRO V

5. Enter the following DBS utility command to reload ISQL:

   RELOAD PACKAGE (SQLDBA.ARIISQL) REPLACE KEEP INFILE (ARIISQLM);

6. Create the table SQLDBA.ROUTINE, and optionally any other userid.ROUTINE tables that you want. The following SQL statement illustrates the creation of a routine table:

   CREATE TABLE ROUTINE (NAME CHAR(8) NOT NULL, SEQNO INTEGER NOT NULL,
     COMMAND VARCHAR(254) NOT NULL, REMARKS VARCHAR(254) NOT NULL);

Using an SQL/DS Application on a Non-SQL/DS Application Server

Before an SQL/DS application can run on a non-SQL/DS application server, you must load the application package on the non-SQL/DS application server. You must obtain the necessary program bind and table access privileges for your authorization-id on the target application server. The following example shows the unload of a package from the SQL/DS side (SQLLDLR01) and the reload of the package on the DB2 side (DB2CENTDIST):

1. The following DBS utility command unloads a package from SQL/DS (SQLLDLR01):

   UNLOAD PACKAGE (package-name) FROM (SQLLDLR01) OUTFILE (ddname);

2. The following DBS utility command, issued in SQL/DS (SQLLDLR01), reloads the package on DB2 (DB2CENTDIST):

   RELOAD PACKAGE (package-name) REPLACE KEEP TO (DB2CENTDIST) INFILE (ddname);

3.6 AS/400 Definitions

Configuring communications for a distributed relational database requires that the local and remote systems be defined in the network. A relational database directory associates communications configuration values with relational databases in the distributed relational database network.

The AS/400 is an SNA-compliant system. To allow the AS/400 to communicate with MVS or VM, you must have VTAM definitions. To communicate with OS/2, you need to configure the OS/2 Communications Manager. In our configuration we used the same AS/400 line description to connect to the other hosts (MVS, VM, and OS/2).

The most important task is to identify which parameters on the different platforms must match.

Figure 38 on page 61 shows the matching parameters that you must define to connect an AS/400 to a VTAM-based host. The AS/400 device descriptions are created automatically because the AS/400 controller description was created with the parameter APPN(*YES).


3.6.1 SNA Definitions

Figure 38. AS/400 and VTAM: Matching Parameters

If you are going to connect the AS/400 to a VTAM-based host (MVS or VM), we recommend that you first define the VTAM and NCP names in their respective macros and then define the AS/400 communications objects. Our AS/400 is connected in a token-ring network, and we show the VTAM and AS/400 definitions for this connection.

The OS/2 definitions required to connect to the AS/400 are given in "Partner Token-Ring Address and Logical Unit Name" on page 81.

VTAM

The proper VTAM macros must be coded in the host to which you will connect the AS/400. You will need a VTAM listing to check the values.

You must define the VTAM PU, LU, PATH, and MODEENT macros to match the AS/400 definitions. The AS/400 communication objects required to connect an AS/400 to any other system are:

• Line description
• Controller description
• Device description.

You can find the AS/400 line description parameters using an AS/400 terminal by entering the WRKCFGSTS *LIN (Work with Configuration Status) command. Then press Enter, find the line description you are using, select option 8, and press Enter. Then select option 5 and press Enter.

AS/400 Objects and Equivalencies

The AS/400 line description corresponds to the VTAM PATH macro. The equivalent information on the OS/2 host is found in the OS/2 Communications Manager LAPS.

The AS/400 controller description is equivalent to the VTAM PU macro. The equivalent information on the OS/2 host is found in the OS/2 Communications Manager partner LU profile.

The AS/400 device description is equivalent to the VTAM LU macro. The equivalent information on the OS/2 host is found in the OS/2 Communications Manager partner LU and LU profiles.

The AS/400 mode description is equivalent to the VTAM MODEENT macro. The equivalent information on the OS/2 host is found in the OS/2 Communications Manager Transmission Service Mode profile and Initial Session Limits profile.

Table 9 shows the corresponding communication objects in AS/400, VTAM, and OS/2 Communications Manager.

Table 9. Communications Definitions

AS/400             VTAM     OS/2
Line description   PATH     LAN adapter
Controller         PU       Partner LU
Device             LU       Partner LU and LU profile
Mode description   MODEENT  Mode profile

Figure 39 on page 63 shows the ITSC VTAM definition for the AS/400. You can get this information in the VTAM listing.


********************************************************************
SWSJ004  VBUILD TYPE=SWNET,                                        +
               MAXGRP=1,                                           +
               MAXNO=1
*
SJA2054  PU    ADDR=01,                                            +
               IDBLK=056,IDNUM=A2054,                              +
               ANS=CONT,DISCNT=NO,                                 +
               IRETRY=NO,ISTATUS=ACTIVE,                           +
               MAXDATA=4060,MAXOUT=7,                              +
               MAXPATH=5,PASSLIM=7,                                +
               PUTYPE=2,SECNET=NO,SSCPFM=USSSCS,                   +
               MODETAB=POKMODE,DLOGMOD=LU62APPA,                   +
               USSTAB=USSRDYN,LOGAPPL=SCGVAMP,                     +
               PACING=0,VPACING=2
*
PA2054   PATH  DIALNO=400052047158,                                +
               USE=YES,                                            +
               GID=1,                                              +
               PID=1,                                              +
               REDIAL=2
*
SJA2054I LU    LOCADDR=000,DLOGMOD=LU62APPA
SJA2054A LU    LOCADDR=001,DLOGMOD=DTNNJE0,MODETAB=RSCSTAB
SJA2054B LU    LOCADDR=002,DLOGMOD=LU62APPA
SJA2054C LU    LOCADDR=003,DLOGMOD=LU62APPA
SJA2054D LU    LOCADDR=004,DLOGMOD=LU62APPA
SJA2054E LU    LOCADDR=005,DLOGMOD=LU62APPA
SJA2054F LU    LOCADDR=006,DLOGMOD=LU62APPA

Figure 39. VTAM: AS/400 Definition

AS/400

The AS/400 system must be defined in the network so that each system can identify itself and the remote systems in the network. To define an AS/400 system in a network you must:

1. Change or define the network attributes.
2. Create the line descriptions.
3. Create a controller description.
4. Create a class-of-service description.
5. Create a mode description.
6. Create a device description, automatically or manually.
7. Create or update the local QAPPNLCL and remote QAPPNRMT configuration lists.


AS/400 Network Attributes Definition (LU Name)

Use the OS/400 CHGNETA (Change Network Attributes) command to change the network attributes. You can press F4 (PROMPT) and type over the fields with the new definition.

Figure 40 shows the network attributes of the ITSC AS/400.

Local System Name . . . . . . . . . : SJAS400A
Local Network ID  . . . . . . . . . : USIBMSC
Local Control Point Name  . . . . . : SJAS400A
Default Local Location Name . . . . : SJA2054I
Default Mode  . . . . . . . . . . . : IBMRDB
APPN Node Type  . . . . . . . . . . : *NETNODE

Figure 40. ITSC AS/400 Network Attributes Definition

In an AS/400 environment, DRDA can be implemented using token-ring, SDLC, Ethernet, ISDN, or X.25 communication protocols. Our AS/400 is attached to a token-ring LAN that is connected to a 3745 Communications Controller and, through a token-ring bridge, to the other RDBMSs.

Token-Ring Line Description

As our AS/400 is connected in a token-ring LAN, we used the command CRTLINTRN (Create Line Token Ring Network) to create our line description definition. See Figure 41 for details.

CRTLINTRN LIND(TRLINE) RSRCNAME(LIN031) MAXFRAME(4060) +
          ADPTADR(400052047158) EXCHID(056A2054) +
          LINKSPEED(4M) AUTOCRTCTL(*YES) +
          TEXT(′ ITSC DRDA Token Ring Line 4M′)

Figure 41. AS/400: Token-Ring Line Description Definition

ADPTADR: The ADPTADR parameter in the AS/400 line description must match the DIALNO parameter in the VTAM PATH macro.

EXCHID: The first three digits of the EXCHID parameter in the AS/400 line description correspond to the IDBLK parameter in the VTAM PU macro. The remaining five digits of the EXCHID parameter correspond to the IDNUM parameter in the same VTAM PU macro.

Partner Token-Ring Address (Creating a Controller Description)

If the AUTOCRTCTL parameter on a token-ring or Ethernet line description is set to *YES, the controller description and device description are automatically created when the system receives a session start request over the line.

The VTAM SSCPNAME value must match the AS/400 controller description RMTCPNAME parameter.

Optionally, you can define the controller with the AS/400 CRTCTLHOST or CRTCTLAPPC (Create Controller Description Host or APPC) commands. See Figure 42 on page 65.


CRTCTLHOST CTLD(TRCTLHOST) LINKTYPE(*LAN) ONLINE(*YES) +
           APPN(*YES) SWTLINLST(TRLINE) +
           MAXFRAME(4060) RMTNETID(USIBMSC) +
           RMTCPNAME(SCG20) INLCNN(*DIAL) +
           ADPTADR(400008210200) +
           TEXT(′ Controller to VTAM SSCPNAME SCG20′)

Figure 42. AS/400: Controller Description Definition

Partner LU Name (Creating a Device Description)

The device description describes the characteristics of the logical connection between the AS/400 and remote systems. The device description is created automatically or manually, depending on the APPN parameter on the CRTCTLAPPC or CRTCTLHOST command.

If APPN(*YES) is specified, the device description is automatically created and attached to the appropriate controller description when the session is established.

If APPN(*NO) is specified on the controller description, the device description must also specify APPN(*NO) to indicate that the device is not used in an APPN network. In this case the device description must be created manually.

The device description is equivalent to the VTAM LU macro.

To manually create a device description, you must use the CRTDEVAPPC (Create Device Description APPC) command.

Figure 43 shows the device description of the AS/400 DRDA connection to DB2. This device was created automatically when the first connection was established. We present it here to point out the AS/400 definitions that must match the VTAM definitions. If you want to create the device description manually, you can use this definition as an example.

CRTDEVAPPC DEVD(LUDB23) LOCADR(00) RMTLOCNAME(LUDB23) +
           ONLINE(*NO) LCLLOCNAME(SJA2054I) +
           RMTNETID(*NETATR) MODE(*NETATR) MSGQ(*LIBL/QSYSOPR) +
           APPN(*YES) SNGSSN(*NO) TEXT(′ AUTOMATICALLY CREATED BY QLUS′)

Figure 43. AS/400: Device Description Definition

Class of Service

A class of service is used to select the communication routes (transmission groups) and assign transmission priority for sessions using APPN. The OS/400 system supplies five classes of service:

• #CONNECT
• #BATCH
• #BATCHSC
• #INTER
• #INTERSC.


Other classes of service can be created using the CRTCOSD (Create Class-of-Service Description) command.

Mode Description Definition

The mode description provides the session characteristics and the number of sessions that are used to negotiate the allowed values between the local and the remote location. The mode description also points to the class of service to use for the conversation.

The AS/400 mode description MAXLENRU and PACING parameters are equivalent to the VTAM RU size and pacing definitions.

Five mode descriptions are shipped with the OS/400 system:

• BLANK (the default mode name specified in the network attributes when the system is shipped)
• #BATCH
• #BATCHSC
• #INTER
• #INTERSC.

Other mode descriptions can be created using the CRTMODD command.

AS/400 APPN Location List Configuration

APPN location lists are used to define the names of local locations and to describe special characteristics of remote locations. APPN location lists are used only by APPN configurations.

APPN Local Location List: The local location list defines the names of the locations that are defined on the local system. Only one local location list resides on the system. Each AS/400 in a network has one local network identifier and a control point name.

APPN Remote Locations List: Only one APPN remote location list resides on the system. In the ITSC DRDA network, all hosts belong to the DRDA configuration, and we want to go from any AR to any AS; the DRDA conversation uses the LU 6.2 protocol. Therefore all remote locations must be defined to the AS/400 in the APPN remote locations list.

Defining the Configuration List: The WRKCFGL (Work with Configuration List) or ADDCFGLE (Add Configuration List Entry) commands can be used to define the APPN configuration list.

The AS/400 Communications Configuration Reference and the AS/400 APPN Guide contain more information about configuring for networking support and working with location lists.

Figure 44 on page 67 shows our AS/400 remote configuration list definition.


Configuration list . . . . . . . . : QAPPNRMT
Configuration list type  . . . . . : *APPNRMT
Text . . . . . . . . . . . . . . . :

            -----------------APPN Remote Locations------------------
            Remote                    Remote       Control
Remote      Network      Local        Control      Point        Secure
Location    ID           Location     Point        Net ID       Loc
LUDB23      USIBMSC      SJA2054I     SCG20        USIBMSC      *YES
LUSQLVMA    USIBMSC      SJA2054I     SCG20        USIBMSC      *YES

Figure 44. ITSC AS/400 Configuration List Definition

3.6.2 DRDA Definitions

To connect an AS/400 in a DRDA environment, you need to check the AS/400 coded character set identifier (CCSID) and define the AS/400 RDB directory entry.

AS/400 CCSID

DataHub does not support a CCSID of 65535. Therefore the user profile CCSID of the DataHub user cannot be 65535, and the CCSID of the database managed by DataHub cannot be 65535.

In the AS/400 the CCSID is specified at:

• System level, with the system value parameter QCCSID
• User profile level, with the CCSID parameter in the AS/400 user profile.

The system value QCCSID at the AS/400 used in our network is defined as 37 (US English), and the same value is specified at DB2, SQL/DS, and OS/2 Database Manager.

AS/400 RDB Directory

For an SAA RDB to access the AS/400 database through DRDA, it is necessary to create an entry in the AS/400 relational database directory. To create the entry, execute the ADDRDBDIRE (Add Relational Database Directory Entry) or WRKRDBDIRE (Work with Relational Database Directory) command.

RDB Name: The AS/400 database is recognized by other RDBMSs through a *LOCAL entry in the AS/400 RDB directory. In the ITSC AS/400 environment, the AS/400 local database name is SJ400DLR1:

ADDRDBDIRE RDB(SJ400DLR1) RMTLOCNAME(*LOCAL)

Remote RDB Name: The same command is used to add a remote RDB name to the AS/400 RDB directory. You simply repeat the command to add all the remote RDBs:

ADDRDBDIRE RDB(DB2CENTDIST) RMTLOCNAME(LUDB23) DEV(*LOC) +
           LCLLOCNAME(SJA2054I) RMTNETID(USIBMSC) +
           MODE(IBMRDB) TNSPGM(*DRDA)

If you are connecting the AS/400 to an SQL/DS RDBMS, the AS/400 directory entry parameter TNSPGM must match the SQL/DS TPNAME.


3.6.3 Testing DRDA Connections

After you have successfully completed all network definitions, you must use the WRKRDBDIRE or ADDRDBDIRE command to define the *LOCAL OS/400 database and all the application servers that you want to access from the AS/400. You must include an entry for each application server that the AS/400 will access.

Figure 45 shows the AS/400 relational database directory definition.

OS/400 Local Database Display Entry

Relational database  . . . . . . . . . . . . . : SJ400DLR1
Remote location:
  Remote location  . . . . . . . . . . . . . . : *LOCAL
  Device description . . . . . . . . . . . . . :
  Local location . . . . . . . . . . . . . . . :
  Remote network identifier  . . . . . . . . . :
  Mode . . . . . . . . . . . . . . . . . . . . :
Transaction program  . . . . . . . . . . . . . :
Text . . . . . . . . . . . . . . . . . . . . . : OS/400 Local Databa

DB2 Host Remote Database Display Entry

Relational database  . . . . . . . . . . . . . : DB2CENTDIST
Remote location:
  Remote location  . . . . . . . . . . . . . . : LUDB23
  Device description . . . . . . . . . . . . . : *LOC
  Local location . . . . . . . . . . . . . . . : SJA2054I
  Remote network identifier  . . . . . . . . . : USIBMSC
  Mode . . . . . . . . . . . . . . . . . . . . : IBMRDB
Transaction program  . . . . . . . . . . . . . : *DRDA
Text . . . . . . . . . . . . . . . . . . . . . : DB2 Server

SQL/DS Host Remote Database Display Entry

Relational database  . . . . . . . . . . . . . : SQLLDLR01
Remote location:
  Remote location  . . . . . . . . . . . . . . : LUSQLVMA
  Device description . . . . . . . . . . . . . : *LOC
  Local location . . . . . . . . . . . . . . . : SJA2054I
  Remote network identifier  . . . . . . . . . : USIBMSC
  Mode . . . . . . . . . . . . . . . . . . . . : IBMRDB
Transaction program  . . . . . . . . . . . . . : SQLVMA
Text . . . . . . . . . . . . . . . . . . . . . : SQL/DS Server

Figure 45. AS/400: Local and Remote Relational Database Directory

After all SNA and DRDA directory definitions are complete, you need to test the connection through the DRDA network. To test the connection from the AS/400, you can use the STRSQL (Start SQL) command. Within an SQL interactive session, you can use the CONNECT TO RDBNAME statement, where RDBNAME is the same name that is specified in the AS/400 RDB directory, that is, the RDB name. If the connection from the AS/400 to the remote database is working properly, you will receive a message specifying that you are connected to a remote database. Otherwise an error message will be issued at your AS/400 SQL interactive screen.

After a successful connection from the AS/400 to a remote database, we recommend that you execute an SQL SELECT statement against the remote database. If you receive an error message, you should verify the AS/400 message, the job log, and the QSYSOPR message queue, and use the appropriate approach on the remote system to determine what went wrong.
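For example, a minimal interactive test from STRSQL might look like the following. DB2CENTDIST is the remote DB2 RDB name from our directory; the query against the DB2 catalog table SYSIBM.SYSTABLES is just a convenient illustration, since that table exists on any DB2 system, and any small query against a known remote table would serve equally well.

   CONNECT TO DB2CENTDIST
   -- After the "connected" message appears, try a simple query:
   SELECT COUNT(*) FROM SYSIBM.SYSTABLES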

3.6.4 Binding Applications

It is not necessary to bind AS/400 interactive SQL to the SQL/DS or DB2 RDBMSs. The bind is done automatically at the first successful connection. All AS/400 users share the package created at the remote locations.

3.7 OS/2 Definitions

In the sections that follow, we explain the relationship between the OS/2 components, review the general steps required to customize your OS/2 environment for DRDA connectivity, and show how to customize the SNA, DRDA, and RDS key connectivity parameters. Also provided are screen captures of the most important windows you will use to customize your OS/2 environment. This document does not show how to install the different components; refer to the component installation documents.

3.7.1 OS/2 Host Components

Outside the OS/2 base operating system, the components used on an OS/2 DRDA host are:

• The LAN Adapter and Protocol Support (LAPS)

  The LAPS program is provided in different software:

  − Extended Services for OS/2
  − LAN Server/Requester (any version)
  − NTS/2.

  The LAPS program deals with token-ring network customization and support. It also provides the NETBIOS protocol, which can be used to connect clients to an OS/2 Database Manager server or to interconnect OS/2 Database Manager servers with one another (for example, to connect the DataHub/2 workstation to an OS/2 host).

  If you are using a physical connection other than token-ring (for example, SDLC), the OS/2 Communications Manager provides you with the necessary support.

• OS/2 Communications Manager

  Note: OS/2 Communications Manager is not the name of a product. When we use the term OS/2 Communications Manager, we are really talking about the communications manager component of Extended Services for OS/2 or the Communications Manager/2 (CM/2) product. All the screen captures in this chapter are screen captures of CM/2.

  The OS/2 Communications Manager provides you with the LU 6.2 support that is required for connecting the OS/2 host to the different DRDA hosts (MVS, VM, and AS/400). The LU 6.2 protocol is also used for the DataHub tools conversation. It could also be used instead of the NETBIOS protocol to connect OS/2 Database Manager clients to the database server.

• OS/2 Database Manager

  OS/2 Database Manager provides you with a relational database interface that is used to access either local or remote databases. You will need to customize the different OS/2 Database Manager directories to enable the different connections. Two protocols are used in a DataHub environment:

  − DRDA for interconnection between the MVS, VM, and AS/400 hosts. The DRDA protocol is also used to connect OS/2 Database Manager servers to these three hosts.
  − RDS, which is used to interconnect OS/2 Database Manager clients and servers.

  Note: OS/2 Database Manager is not the name of a product. When we use the term OS/2 Database Manager, we are really talking about the database manager component of Extended Services for OS/2 or the DB2/2 product. Because the directory tool interface has not changed, the examples we provide are applicable to both products.

• Distributed Database Connection Services/2

  DDCS/2 enables the DRDA architecture on the OS/2 Database Manager platform. It is not required if you do not have any MVS, VM, or AS/400 hosts in your organization. This product enables an additional directory, the DCS directory, which is customized through the OS/2 Database Manager Directory Tool.

3.7.2 Relationship between OS/2 Components

As you will have noticed when reading the previous section, many components are used on the OS/2 platform, and many of the values used to customize one component are used by other components. Figure 46 on page 71 will help you understand the relationships between the different values used. Problem determination becomes much easier once you understand those relationships.

The diagram in Figure 46 on page 71 shows the flow of three different OS/2 Database Manager requests:

• Local connect request
• Inbound connect request, as it would be received by an OS/2 Database Manager server workstation or a DDCS/2 gateway
• Outbound connect request, as it would be sent by an OS/2 Database Manager client to a server or a DDCS/2 gateway, or from a DDCS/2 gateway to a DRDA server.

Figure 46 on page 71 does not show the flow for a NETBIOS type of connection used between an OS/2 Database Manager client and an OS/2 Database Manager server, or between servers. We do not show this flow because the NETBIOS flow does not involve OS/2 Communications Manager; it involves only the OS/2 Database Manager component.


Figure 46. OS/2 Component Definition Relationships

Local Request Flow

The local request (bottom part of the diagram), which could have been entered on an OS/2 command line or through the OS/2 Database Manager API, comes into OS/2 Database Manager. First, OS/2 Database Manager looks in the system directory for an alias with the name MNYCENT. From that entry, OS/2 Database Manager determines whether that database is local or remote, and what the database name is. In the example, the MNYCENT database is remote, it can be accessed through the WNYCENT workstation, and the database name is DCSMNYC.

Then OS/2 Database Manager looks for an entry with the database name DCSMNYC in the DCS directory. Because this directory entry exists, this OS/2 workstation is used as a DDCS/2 gateway. When OS/2 Database Manager finds DCSMNYC, it can determine the RDB name to which it will connect and the name of the program that will handle the request on the remote site. Because OS/2 Database Manager has found the entry, it will use the DRDA protocol to exchange information with the remote RDBMS.

On the other hand, using the WNYCENT workstation name entry, OS/2 Database Manager can figure out the partner LU, the mode, and the type of protocol to use to get to the partner RDBMS. Because the partner is a DRDA server (DB2), it is mandatory to use the APPN protocol.

Note: If the NETBIOS protocol were used, then instead of entering the SNA definitions, the remote OS/2 Database Manager workstation name would have been used.

Once all information has been gathered using the OS/2 Database Manager directory entries, OS/2 Database Manager passes that information to OS/2 Communications Manager to establish the session with the partner RDBMS. The outbound flow is explained below under "Outbound Request Flow."
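To make the chain of lookups concrete, the sketch below shows the single statement a user issues and, as comments, the directory entries it travels through. MNYCENT, DCSMNYC, and WNYCENT are the example names from Figure 46, not values you must use.

   -- Issued at the OS/2 workstation (command line or API):
   CONNECT TO MNYCENT;
   -- 1. System directory: alias MNYCENT -> remote database DCSMNYC,
   --    reachable through workstation entry WNYCENT.
   -- 2. DCS directory: DCSMNYC is present, so this workstation acts
   --    as a DDCS/2 gateway; the entry yields the target RDB name
   --    and the remote program name, and DRDA is used.
   -- 3. Workstation entry WNYCENT: partner LU, mode, and protocol,
   --    which OS/2 Communications Manager uses to allocate the
   --    LU 6.2 session.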

Inbound Request Flow

When an OS/2 Database Manager client or another OS/2 Database Manager server wants to connect to an OS/2 Database Manager server or a DDCS/2 gateway, two communication protocols can be used: NETBIOS and LU 6.2 (APPN). NETBIOS uses the OS/2 Database Manager workstation name, which is defined at installation time, to establish the connection. This relationship is not shown in Figure 46 on page 71. When the inbound request uses the LU 6.2 protocol, an LU-to-LU session is negotiated and established between the remote LU (client) and the local LU (server). The local node characteristics parameters must be used by the client in order to connect to the server. The mode name entry is used for the session negotiation between the two LUs.

Once the session has been established between the client and the server, SQL statements can be received and executed at the server and responses transmitted to the client. The first SQL statement to be passed by the client is the connect (that is, START USING DATABASE). From there on, OS/2 Database Manager uses the same flow as the local flow to determine which database it will use.

Outbound Request Flow

When OS/2 Database Manager wants to establish a session with a remote partner using the LU 6.2 protocol, it passes the partner LU name, the transaction program name, and the mode name to OS/2 Communications Manager, which in turn performs the session allocation. OS/2 Database Manager does not need to know where the partner LU is located; that is the OS/2 Communications Manager′s job. Through its own tables, OS/2 Communications Manager will determine:

• At which physical address this partner LU is located
• Which control point controls the sessions
• Which protocol to use to connect (for example, token-ring, SDLC, and the like).

Once the session has been established between OS/2 Database Manager and the remote RDBMS, the SQL traffic will start to flow. When the remote RDBMS is a DRDA server, the flow is governed by the DRDA flows and protocols.


3.7.3 General Customization Plan

The following plan gives a general idea of the different steps required to implement DRDA or RDS on the OS/2 platform:

LAPS

• Customize the LAPS (physical connection).

OS/2 Communications Manager

• Define your SNA LU (local node characteristics) in OS/2 Communications Manager.
• Define the different partner LU connections using the OS/2 Communications Manager connection menus.
• Define the modes you will be using for the different connections in OS/2 Communications Manager.

OS/2 Database Manager

• Define the OS/2 Database Manager server workstation name. This name is used to connect clients or other servers to the OS/2 Database Manager server if the protocol used is NETBIOS.
• Catalog the OS/2 Database Manager workstation entries. They will need to refer to either the partner LU definitions made in OS/2 Communications Manager or other OS/2 Database Manager workstation names to which you want to connect.
• Catalog the OS/2 Database Manager directory entries for all remote databases. You will need to refer to the workstation entries made previously.
• Catalog the OS/2 Database Manager database connection services entries for the DRDA connection.

3.7.4 SNA Definitions

If your OS/2 host system is connected to an MVS or VM host, or if you want to use DataHub to manage RDBMSs that are located on one of those platforms, your OS/2 workstation needs to be defined in VTAM.

OS/2 VTAM Definitions

Figure 47 on page 74 shows the exact VTAM definitions we used for the DataHub/2 workstation. In fact, all of our OS/2 systems are defined the same way, except for the LU names and the IDNUM.

********************************************************************
SWSJ001  VBUILD TYPE=SWNET,                                        +
               MAXGRP=1,                                           +
               MAXNO=1
*
SJA2039  PU    ADDR=01,                                            +
               IDBLK=05D,IDNUM=A2039,                              +
               ANS=CONT,DISCNT=NO,                                 +
               IRETRY=NO,ISTATUS=ACTIVE,                           +
               MAXDATA=265,MAXOUT=1,                               +
               MAXPATH=1,                                          +
               PUTYPE=2,SECNET=NO,                                 +
               MODETAB=POKMODE,DLOGMOD=DYNRMT,                     +
               USSTAB=USSRDYN,LOGAPPL=SCGVAMP,                     +
               PACING=1,VPACING=2
*
SJA2039A LU    LOCADDR=002
SJA2039B LU    LOCADDR=003
SJA2039C LU    LOCADDR=004
SJA2039D LU    LOCADDR=005
SJA2039I LU    LOCADDR=0,DLOGMOD=IBMRDB
SJA2039J LU    LOCADDR=0,DLOGMOD=IBMRDB
SJA2039K LU    LOCADDR=0,DLOGMOD=IBMRDB
SJA2039L LU    LOCADDR=0,DLOGMOD=IBMRDB
*

Figure 47. OS/2 Host Definition. This definition is used for the DataHub/2 workstation.

Figure 48 on page 75 shows the relationships that you must consider when you customize OS/2 Communications Manager. If you do not understand the content of the menus, we suggest you click on the Help push button or press the F1 key.


Figure 48. OS/2 Communications Manager Relationships with VTAM Definitions

Token-Ring Address

The OS/2 SNA connection over a token-ring network requires customization of both the LAPS program and the OS/2 Communications Manager.

If your workstation is already attached to a token-ring network, this customization is not necessary. However, you should take note of some important performance parameter values, such as:

• Number of adapter receive buffers
• Receive buffer size
• Number of adapter transmit buffers
• Transmit buffer size.

The customization of the token-ring connection, which we use for our network, requires the use of the LAPS program.

Note: For the screen captures that follow, we used the NTS/2 software.

To customize the LAN adapter, here are the steps involved using the NTS/2 software:

1. Start an OS/2 session.

2. Execute the command C:\IBMCOM\LAPS.

   You will get the LAPS main window.

   Note: Not all the windows have been captured. Only the most important windows are shown in this document.


3. Click on Configure....

   The Configuration window will appear.

4. Select the Configure LAN Transport radio button and click on Continue....

   The Configure Workstation window will appear. As shown in Figure 49, your actual configuration will be displayed in the Current Configuration part of the menu. You should have the IBM Token-Ring Network Adapters selected and the two protocols associated with the Token-Ring Network Adapter:

   • IBM IEEE 802.2, used by LU 6.2
   • IBM OS/2 NETBIOS, used by RDS and LAN Requester in our configuration.

   Figure 49. LAPS: Configure Workstation Window

5. Select Token-Ring Network Adapters.

6. Click on Edit.

   The Parameters for IBM Token-Ring Network Adapters window will appear, as shown in Figure 50 on page 77. If you are using locally administered addresses, enter the network adapter address that has been given to you by your network personnel; otherwise leave the field blank.


Figure 50. LAPS: Parameters for IBM Token-Ring Network Adapters Window

7. Click on OK when you are done.

   The Configure Workstation window (Figure 49 on page 76) will then reappear.

8. You can edit the two protocol definitions in the Current Configuration box by selecting either protocol (that is, IBM IEEE 802.2 or IBM OS/2 NETBIOS), clicking on Edit, and then clicking on OK when you are done.

9. When the LAPS configuration is finished, click on OK on the Configure Workstation window.

10. Click on Exit on the LAPS main window.

   The CONFIG.SYS Update window will appear.

11. If you updated your configuration, click on Continue... to update your CONFIG.SYS file. If you did not update the configuration, click on Exit.

To activate the changes, you will need to shut down and restart your workstation. Once the system is operational again, you can verify the content of the LANTRAN.LOG file in the IBMCOM directory. This log file contains the status of the token-ring card, including the token-ring address of the adapter.

IDBLK, IDNUM, Network ID, Logical Unit Name, SNA Control Point Name

Now that your workstation is connected to the network, you need to define the different communication parameters using the OS/2 Communications Manager interface. The IDBLK, IDNUM, Network ID, Logical Unit Name, and SNA Control Point Name are defined on the Local Node Characteristics window, as shown in Figure 54 on page 80. But before explaining that, let′s discuss how to get there.

1. If you use Communications Manager/2, as we did, you start the Communications Manager Setup application from the Communications Manager/2 folder. For the Extended Services for OS/2 environment, you start the SNA Network Definitions Configuration from the Communications Manager folder.

2. Select the OS/2 Communications Manager configuration file you want to customize.

   The Communications Manager Configuration Definition window will appear.

3. Select Token-ring or other LAN types in the Workstation Connection Type window.

   The list of available features will change in the Feature or Application window.

4. Select CPI Communications from the Feature or Application window.

   You will end up with the screen shown in Figure 51.

Figure 51. CM/2: Communications Manager Configuration Definition Window

5. Click on Configure....

   The Communications Manager Profile List Sheet window, as shown in Figure 52 on page 79, will appear. In Extended Services for OS/2, this is where you will end up once you have entered the name of the configuration file.

   From this window, you will be able to define all the key connectivity parameters that are used for both the DRDA network and the DataHub tools conversation protocol.


Figure 52. CM/2: Communications Manager Profile List Sheet Window

6. Select DLC - Token-ring or other LAN types and click on Configure....

   The Token-Ring or Other LAN Types DLC Adapter Parameters window will appear, as shown in Figure 53. This window is used to customize the LU 6.2 usage of the token-ring adapter. The following parameters have an impact on capacity and performance:

   • Send window count
   • Receive window count
   • Maximum I-field size.

   For any performance-related parameter, you should use the default value provided.

   Figure 53. CM/2: Token-Ring or Other LAN Types DLC Adapter Parameters Window

7. Once the setup is complete, click on OK.



   You are then back at the Communications Manager Profile List Sheet window (Figure 52).
8. Select the SNA local node characteristics line and click on Configure....
   The Local Node Characteristics window (Figure 54) will appear. On this window, you will define your OS/2 workstation as it is known to the network. If you have MVS or VM hosts, you will need to refer to the VTAM definition to fill in the appropriate information (see Figure 47 on page 74).
   The Network ID information must be provided by your network personnel. The Local node name is the LU name (label of the LU macro) of one of the LUs that are defined with LOCADDR=0. In the VTAM definition that is provided, four independent LUs are defined for the PU. The LU SJA2039I is an independent LU because it is defined with LOCADDR=0 and the PU is defined as a PU type 2 (PUTYPE=2). The Local node ID (hex) values are the IDBLK and IDNUM parameter values in the OS/2 workstation VTAM definition.
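   For illustration only, the relevant fragment of such a VTAM switched major node might look like the sketch below; the statement labels and the omission of the other operands are our assumptions, and the authoritative definition is the one in Figure 47 on page 74:

* Illustrative VTAM sketch only -- see Figure 47 for the real definition
SJA2039   PU    IDBLK=05D,IDNUM=A2039,PUTYPE=2
SJA2039I  LU    LOCADDR=0

   Here 05D and A2039 are the two values entered in the Local node ID (hex) field, and SJA2039I is the label used as the Local node name.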

Figure 54. CM/2: Local Node Characteristics Window

Table 10 shows you the local node definitions we used for our OS/2 system.

Table 10. OS/2 Local Node Definitions for the ITSC Network

 OS/2 System          Network ID   Local Node Name   Local Node ID   Local Node Alias
 DH/2 Control Point   USIBMSC      SJA2039I          05D A2039       ASJA2039
 Montreal             USIBMSC      SJA2018I          05D A2018       ASJA2018
 Paris                USIBMSC      SJA2050I          05D A2050       ASJA2050
 Rio De Janeiro       USIBMSC      SJA2019I          05D A2019       ASJA2019

9. Once you have defined your local node characteristics, click on OK.
   The Communications Manager Profile List Sheet window (Figure 52 on page 79) will then appear.

Partner Token-Ring Address and Logical Unit Name

Now that your workstation is defined to the network, you need to define the connection to the MVS host and the partner LUs associated with it and with which you will exchange information.

The steps that follow start at the Communications Manager Profile List Sheet window of Communications Manager/2 (Figure 52 on page 79). For Extended Services for OS/2, partner definition begins right after you give the name of the configuration file to the SNA Network Definitions Configuration program.

If your PLU resides on a network different from yours, you will have to go into both the SNA connections dialog to define the PLU and the SNA features dialog to change the PLU network identification. If your PLU resides on the same network as yours, you need only go into the SNA connections dialog to define the PLU. The following example, which is the ITSC case, assumes only one network.

1. Select the SNA connections line from the window and click on Configure....
   The Connections List window will appear. As you can see in Figure 55 on page 82, there are radio buttons for the type of partner you can define. Based on the partner type you select, a list of links will appear. In Figure 55 on page 82, we have only one link defined for a host type connection. The host on that window represents VTAM/NCP hosts. Under the peer node type, you would find the OS/2 and AS/400 systems.
   Network node type partners are special nodes that manage nodes in an APPN network. An APPN network is made up of network node systems and end node systems. An end node system needs to know only one network node in order to connect to every end node in the APPN network. A new end node that is connected to a network node is immediately known to the other nodes. The network nodes take care of routing the connection requests through the network. In addition, new node information is broadcast to all network nodes. With this type of APPN network, you do not have to define all the nodes everywhere in the network.
   We did not implement any network node in our OS/2 configuration.

Figure 55. CM/2: Connections List Window (Host Links)

2. Let us now suppose that we want to connect to the New York system, which is an MVS system. Select the To host radio button. The link named MNYCENT appears. Select it and click on Change....
   The Adapter List used for that link will display (see Figure 56). If you were using another type of connection (for example, SDLC), you would select an adapter type other than the token-ring. Figure 56 shows that we are using token-ring adapter number 0 for that link.

Figure 56. CM/2: Adapter List Window

3. Click on Continue....
   The next window that appears is used to define the remote control point name or the remote node name. Figure 57 on page 83 shows you how we defined the connection to the central site located in New York.

Figure 57. CM/2: Change Connection to a Host Window

   The link name can be any name. For our host link, the LAN destination address is the 3745 token-ring address as defined in the NCP. Refer to section 3.4, "MVS Definitions" on page 40 to see how this token-ring address has been defined. The partner node name is the control point name of the target destination, which is the value associated with the SSCPNAME NCP parameter. The Local PU name and the Node ID (hex) are new in the Communications Manager/2 product. They are used for 3270 terminal emulation. For the local PU name, you need to use the PU name in the OS/2 VTAM definition (Figure 47 on page 74). For the Node ID, use the IDBLK and IDNUM values in the same definition.
4. Once you have defined your remote control point, you need to define the partner LUs that will be accessible through that link. Click on Define Partner LUs....
   You will get the Change Partner LUs window. As shown in Figure 58 on page 84, multiple partner LUs can be accessed through a link. In fact, this window shows you all the definitions to connect to DB2 and DataHub Support/MVS in New York and to SQL/DS and DataHub Support/VM in Toronto.
5. Select the partner LU with which you want to work. Perform the necessary changes, and click on Change....



Figure 58. CM/2: Change Partner LUs Window

6. When finished, click on OK.
   You will be back at the Change a Connection to a Host window (Figure 57 on page 83).
7. Click on OK to save the changes.
   You will end up at the Connections List window (Figure 55 on page 82).
8. Click on Close to terminate the definition of the links.
   You will end up at the Communications Manager Profile List Sheet window (Figure 52 on page 79).

Now that you have defined the connection to the MVS host and all partner LUs associated with it, you can add other links to access other systems, such as the AS/400 or OS/2 systems. The same steps need to be performed for a peer node as for the host node.

Table 11 on page 85 shows the partner LU definitions that are needed to interconnect all the different ITSC systems for the DRDA network. The only system that is not defined as a partner is the DataHub/2 workstation, because it is always a requester in a DRDA or DataHub network.

Table 11. OS/2 Partner LU Definitions for the ITSC Network

 Partner Name                Token-Ring Address   Partner Node Name   Partner LU Name     Partner LU Alias
 New York Central            400008210200         USIBMSC.SCG20       USIBMSC.LUDB23      DBNYCENT
 Toronto Car Dealer          400008210200         USIBMSC.SCG20       USIBMSC.LUSQLVMA    DBTORDR1
 San Jose Car Dealer         400052047158         USIBMSC.SJAS400A    USIBMSC.SJA2054I    ASJDLR1
 Montreal Car Dealer         400052047122         USIBMSC.SJA2018I    USIBMSC.SJA2018I    OMTLDLR1
 Paris Car Dealer            400052047157         USIBMSC.SJA2050I    USIBMSC.SJA2050I    OPARDLR1
 Rio De Janeiro Car Dealer   400052047116         USIBMSC.SJA2019I    USIBMSC.SJA2019I    ORIODLR1

Note: The partner LU names for the OS/2 hosts will actually be used for DataHub Support connectivity definitions.

Mode Name

A mode definition is required to connect LUs together. The steps required to add or modify a mode are given below. As with the other SNA definitions, we start from the Communications Manager Profile List Sheet window (Figure 52 on page 79).

1. Select the SNA Features line and click on Configure....
   The SNA Features List window will display.
2. Select Modes from the Features list box.
   The list of all modes defined locally will display as shown in Figure 59.

Figure 59. CM/2: SNA Features List Window (Modes). This window lists all the modes defined on this workstation.

3. Click on Create... to create a new mode, or select an existing mode from the modes list box and click on Change....
   The Change a Mode Definition window will display (Figure 60 on page 86). For a DRDA or DataHub connection, the class of service that must be selected is #CONNECT. On this window, you specify the RU size and the Receive pacing window, whose values are important for performance.



Figure 60. CM/2: Change a Mode Definition Window

4. Once you have completed the mode definition, click on OK to save it.
   You will be presented with the SNA Features List window (Figure 59 on page 85) with the new mode included in the list of modes.
5. If you have completed all your definitions, click on Close.
   The Communications Manager Profile List Sheet window (Figure 52 on page 79) will appear.
6. If you have completed all your SNA definitions, click on Close.
   The Communications Manager Configuration Definition window will appear (Figure 51 on page 78).
7. Click on Close to terminate and validate the changes that you performed.
8. Click on Close on the Communications Manager Setup window.

3.7.5 DRDA Definitions

Once you have completed the SNA definitions to connect to the different RDBMS partners, you need to customize the OS/2 Database Manager directories to connect a local database name to both the partner LU and the remote RDB name.

To define those directories, several interfaces can be used:

• The OS/2 Database Manager directory tool
• The command line interface (Extended Services for OS/2) or command line processor (DB2/2)
• An OS/2 procedure
• An OS/2 program.

We used the OS/2 Database Manager directory tool. See Figure 61 on page 87, which provides you with a sample OS/2 REXX procedure to create the same directory entries from the command line.


Recommendation

If you are using the LAN configuration or a common DataHub/2 database, we recommend that you use REXX procedures to catalog and uncatalog entries. Once the REXX procedure has successfully executed on one DataHub/2 workstation, it can be executed on the other DataHub/2 workstations.

Figure 61 on page 87 shows you the listing of the REXX procedure we used.

/* REXX command to replicate directory entries to each workstation */
SAY 'Starting workstation definitions...';
SAY;
CALL DBM 'CATALOG NetBios NODE WPARDLR1 REMOTE DHSERVER ADAPTER 0 WITH "OS/2 Paris car dealer"';
CALL DBM 'CATALOG NetBios NODE WMTLDLR1 REMOTE DHSERVER ADAPTER 0 WITH "OS/2 Montreal car dealer"';
CALL DBM 'CATALOG NetBios NODE WRIODLR1 REMOTE DHSERVER ADAPTER 0 WITH "OS/2 Rio car dealer"';
CALL DBM 'CATALOG NetBios NODE WNYCENT REMOTE DHSERVER ADAPTER 0 WITH "MVS New York Central"';
CALL DBM 'CATALOG NetBios NODE WTORDLR1 REMOTE DHSERVER ADAPTER 0 WITH "VM Toronto car dealer 1"';
CALL DBM 'CATALOG NetBios NODE WSJDLR1 REMOTE DHSERVER ADAPTER 0 WITH "AS/400 San Jose car dealer 1"';
SAY;
SAY 'Starting system directory definitions...';
SAY;
CALL DBM 'CATALOG DATABASE OPARDB AS OPARDB AT NODE WPARDLR1 WITH "OS/2 Paris car dealer"';
CALL DBM 'CATALOG DATABASE OMTLDB AS OMTLDB AT NODE WMTLDLR1 WITH "OS/2 Montreal car dealer"';
CALL DBM 'CATALOG DATABASE ORIODB AS ORIODB AT NODE WRIODLR1 WITH "OS/2 Rio car dealer"';
CALL DBM 'CATALOG DATABASE MNYCENT AS MNYCENT AT NODE WNYCENT WITH "MVS New York central"';
CALL DBM 'CATALOG DATABASE VTORDB AS VTORDB AT NODE WTORDLR1 WITH "VM Toronto car dealer 1"';
CALL DBM 'CATALOG DATABASE ASJDB AS ASJDB AT NODE WSJDLR1 WITH "AS/400 San Jose car dealer 1"';
SAY;
SAY 'Catalog definitions ended';

Figure 61. OS/2 DBM: REXX Procedure to Define Directory Entries. Listing of the REXX procedure to define all directory entries for the DataHub/2 database clients.

RDB Name

In a DRDA network, an RDB is given a name that is used as the first part of a three-part name (for example, DB2CENTDIST.PROD.STAFF). This name must be used by the DRDA requester to identify the object it wants to use. Neither OS/2 Database Manager nor DDCS/2 can be a server in a DRDA network; they are always used as requesters to a DRDA server (DB2, SQL/DS, and AS/400). That is why there are no RDB names to define in the OS/2 Database Manager environment for DRDA connectivity.

But, as you will see in Chapter 5, "DataHub Tools Conversation Connectivity" on page 151, the DataHub/2 and DataHub Support/2 products associate an RDB name with each OS/2 Database Manager database that needs to be managed. DataHub Support/2 uses the RDB name to connect to the local OS/2 DBM database alias name.

DRDA or RDS Partners

To define the DRDA and RDS partners, the general steps you need to perform are:

1. Catalog the workstation that will point to the OS/2 Communications Manager definitions or the remote OS/2 Database Manager server workstation name.
2. Catalog the database in the system database directory.
3. Catalog the database in the database connection services (DCS) directory.

It is important to follow the steps in the exact order listed because doing so will help you eliminate some definition problems (for example, typing errors). As Figure 46 on page 71 illustrates, there are many relationships between the OS/2 Communications Manager and OS/2 Database Manager directory definitions. If you are a beginner with OS/2 Database Manager, keep this figure near you. We are sure it will help you.

What follows are detailed steps for, and further explanations of, each directory entry definition. To define directory entries using the OS/2 Database Manager directory tool, these are the steps:

1. Start the directory tool program from the OS/2 Database Manager application folder.
   The Directory Tool window will appear.
2. Select Directory from the action menu.
   The Directory pull-down menu will display (Figure 62). This menu has been captured on the DataHub database server, which is also used as a DDCS/2 gateway. That is why all the different directories are selectable. The minimum directory requirement for a DataHub/2 workstation is Workstation and System Database. A standalone DataHub/2 workstation would require that all four directories be selected unless there are no DRDA servers to manage in the network, as in scenario 1.

Figure 62. OS/2 DBM: Directory Tool Window. This window is used to manage all the OS/2 Database Manager directories.

The first directory entries you should create are the workstation definitions, because the system directory entries refer to them.

The steps are:


1. Select the Directory action from the Directory Tool window action menu (Figure 62).
2. Select the Workstation line from the Directory pull-down menu.
   The Workstation Directory window will appear.
3. Select Workstation from the action menu.
   The Workstation pull-down menu will appear.
4. Select Catalog... from the pull-down menu.


   The Catalog Workstation window will appear (Figure 63 on page 89). On this window, you will need to enter:
   • A workstation name (alias) that will be referred to in the Catalog Database window. In all our definitions, we used a W in front of the alias name to make it easy for you to see the relationship between the parameter values.
   • The protocol used for the connection. For a DRDA connection, you always choose APPN. For an RDS connection, you can select either APPN or NETBIOS. Figure 64 on page 90 shows our choices. For the RDS connections, we used NETBIOS, which requires the server node name.
   • If NETBIOS is selected, the server node name.
   • If the APPN (APPC) protocol is selected, you will need to enter or select from the list box:
     − The Network ID.Partner Logical Unit and the Partner Logical Unit Name, OR the Partner Logical Unit Alias
     − The Local Logical Unit Alias as shown in the Local node alias name field on the Local Node Characteristics window (Figure 54 on page 80)
     − The Transmission Service Mode used for the APPC connection.
   If you have completed all the OS/2 Communications Manager definitions, the workstation definitions will be easy because you will be able to select the values from the list boxes. Because the definitions have already been done, you will have fewer problems to solve if something goes wrong.

Figure 63. OS/2 DBM: Catalog Workstation Window

   Figure 64 on page 90 shows you all the definitions we used to interconnect all DRDA and OS/2 Database Manager hosts. To get the workstation directory list, use the following command from the OS/2 command line:

   DBM LIST NODE DIRECTORY > C:\DBMWS.LST



Node Directory

Number of entries in the directory = 6

Node 1 entry:
 Workstation alias         = WNYCENT
 Comment                   = MVS New York Central
 Comment code page         = 850
 Protocol                  = APPN
 Network ID                = USIBMSC
 Local logical unit alias  = ASJA2039
 Partner logical unit      = LUDB23
 Transmission service mode = IBMRDB

Node 2 entry:
 Workstation alias         = WTORDLR1
 Comment                   = VM Toronto car dealer 1
 Comment code page         = 850
 Protocol                  = APPN
 Network ID                = USIBMSC
 Local logical unit alias  = ASJA2039
 Partner logical unit      = LUSQLVMA
 Transmission service mode = IBMRDB

Node 3 entry:
 Workstation alias         = WSJDLR1
 Comment                   = AS/400 San Jose car dealer 1
 Comment code page         = 850
 Protocol                  = APPN
 Network ID                = USIBMSC
 Local logical unit alias  = ASJA2039
 Partner logical unit      = SJA2054I
 Transmission service mode = IBMRDB

Node 4 entry:
 Workstation alias         = WMTLDLR1
 Comment                   = OS/2 Montreal car dealer
 Comment code page         = 850
 Protocol                  = NetBios
 Adapter number            = 0
 Server NNAME              = MTLDBM1

Node 5 entry:
 Workstation alias         = WPARDLR1
 Comment                   = OS/2 Paris car dealer
 Comment code page         = 850
 Protocol                  = NetBios
 Adapter number            = 0
 Server NNAME              = PARDBM1

Node 6 entry:
 Workstation alias         = WRIODLR1
 Comment                   = OS/2 Rio car dealer
 Comment code page         = 850
 Protocol                  = NetBios
 Adapter number            = 0
 Server NNAME              = RIODBM1

Figure 64. OS/2 DBM: Workstation Directory List

5. Click on Catalog to save the workstation definition.
   The Workstation Directory window should reappear with your new workstation inserted in the list. You can close that window to free up the Directory Tool window.

Now that the workstation definitions are complete, it is time to catalog the databases using these steps:

1. Select Directory from the Directory Tool window action menu (Figure 62 on page 88).
2. Select the System Database line from the Directory pull-down menu.
   The System Database Directory window will appear with a list of all local and remote databases that have already been defined.
3. Select Database from the action menu.
4. Select Catalog... from the Database pull-down menu.

   The Catalog Database window will appear (Figure 65). On this window, you will enter or select:
   • The Alias name of the database. This name will be used by all applications running on this workstation. If this workstation is a database server, the requester will need to use that name as its database name.
   • The Database name. If this database is located on a DRDA server (for example, DB2), the database name is the name of the local database in the DCS directory. If this database is located on an OS/2 Database Manager server, enter the Alias name of the target database.
   • Remote/Local database.
     Note: If you have multiple OS/2 Database Manager systems with the same database name (for example, PERSONAL), you will need to define a unique alias name for each of those systems. Each alias will point to a local database with the name PERSONAL.
   • If the database is remote, the workstation on which it is located, selected from the list box.

Figure 65. OS/2 DBM: Catalog Database Window. This window is used to define the System Database Directory.



   Figure 66 on page 92 shows you all the entries we used to define the databases that the DataHub server could access. To get this list, use the following command from the OS/2 command line:

   DBM LIST DATABASE DIRECTORY > C:\DBMDIR.LST

System Database Directory

Number of entries in the directory = 8

Database 1 entry:
 Database alias       = MNYCENT
 Database name        = DCSMNYC
 Database drive       =
 Database directory   =
 Workstation alias    = WNYCENT
 Database type        = OS2 DBM VER. 3.00
 Comment              = MVS New York Central
 Comment code page    = 850
 Directory entry type = Remote

Database 2 entry:
 Database alias       = VTORDB
 Database name        = DCSVTOR1
 Database drive       =
 Database directory   =
 Workstation alias    = WTORDLR1
 Database type        = OS2 DBM VER. 3.00
 Comment              = VM Toronto car dealer 1
 Comment code page    = 850
 Directory entry type = Remote

Database 3 entry:
 Database alias       = ASJDB
 Database name        = DCSASJD1
 Database drive       =
 Database directory   =
 Workstation alias    = WSJDLR1
 Database type        = OS2 DBM VER. 3.00
 Comment              = AS/400 San Jose car dealer 1
 Comment code page    = 850
 Directory entry type = Remote

Database 4 entry:
 Database alias       = OMTLDB
 Database name        = OMTLDB
 Database drive       =
 Database directory   =
 Workstation alias    = WMTLDLR1
 Database type        = OS2 DBM VER. 3.00
 Comment              = DBM Montreal car dealer
 Comment code page    = 850
 Directory entry type = Remote

Figure 66. (Part 1 of 2) OS/2 DBM: System Database Directory


Database 5 entry:
 Database alias       = OPARDB
 Database name        = OPARDB
 Database drive       =
 Database directory   =
 Workstation alias    = WPARDLR1
 Database type        = OS2 DBM VER. 3.00
 Comment              = DBM Paris car dealer
 Comment code page    = 850
 Directory entry type = Remote

Database 6 entry:
 Database alias       = ORIODB
 Database name        = ORIODB
 Database drive       =
 Database directory   =
 Workstation alias    = WRIODLR1
 Database type        = OS2 DBM VER. 3.00
 Comment              = DBM Rio car dealer
 Comment code page    = 850
 Directory entry type = Remote

Database 7 entry:
 Database alias       = EMQDB
 Database name        = EMQDB
 Database drive       = D:
 Database directory   =
 Workstation alias    =
 Database type        = OS2 DBM VER. 3.00
 Comment              = DataHub/2 Database
 Comment code page    = 850
 Directory entry type = Indirect

Database 8 entry:
 Database alias       = DHDB
 Database name        = EMQDB
 Database drive       = D:
 Database directory   =
 Workstation alias    =
 Database type        = OS2 DBM VER. 3.00
 Comment              = Alias for database EMQDB
 Comment code page    = 850
 Directory entry type = Indirect

Figure 67. (Part 2 of 2) OS/2 DBM: System Database Directory

5. In the Catalog Database window (Figure 65 on page 91), click on Catalog to catalog the database.
   The System Database Directory window will reappear. You can close that window to free up the Directory Tool window.

Now, let's finish the exercise by defining the Database Connection Services (DCS) Directory.

1. Select the Directory action from the Directory Tool window action menu (Figure 62 on page 88).
2. Select the Database Connection Services line from the Directory pull-down menu.
   The DCS Directory window will appear with the list of already defined DRDA RDBMSs.
3. Select Database from the Directory Tool action menu.


4. Click on Catalog... from the Database pull-down menu.
   You are now presented with the Catalog Database window (Figure 68 on page 95). This window has the same title as the window used to catalog a System Database Directory entry, but it has different fields.
5. Select the local database you want to define from the list box window. The list presented here contains all database names that have been defined as REMOTE in the System Database Directory. Do not select OS/2 databases because they do not support the DRDA protocol.
6. Enter the Target Database name. This is the RDB name of the target database. This parameter is key to being able to connect to the target RDBMS.
7. You can enter the Application Requester name if it is different from the default. You should not enter any value unless the DDCS/2 documentation says you should.
8. Enter the parameter values. The field is composed of three values separated by commas, in the format:

   <TP prefix>,<TP name>,<SQLCODE map file>

Parms Definition/Value

Transaction program prefix (key connectivity parameter)
   Default = X'07'
   Recommendation: Do not define it.

Transaction program name (key connectivity parameter)
   Default = 6DB
   Recommendation: Define the TPN only for VM hosts. For a VM host, you need to enter the TPN as indicated in section 3.5, "VM Definitions" on page 51. See the Toronto car dealer DCS parameters value (entry 2) in Figure 69 on page 96.

SQLCODE map file
   This entry indicates which file contains the SQLCODE mapping information. The name varies according to the DRDA host and the OS/2 Database Manager version being used. Refer to the Distributed Database Connection Services/2 Guide to find out which map file should be used for your environment.
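If you prefer the command line interface, the Toronto entry (entry 2 in Figure 69 on page 96) might be created with a command of the following shape; the CATALOG DCS DATABASE syntax shown here is our assumption, so verify it against your DDCS/2 documentation before using it:

DBM CATALOG DCS DATABASE DCSVTOR1 AS SQLLDLR01 PARMS ",SQLVMA,C:\SQLLIB\DCS1ARI.MAP" WITH "DCS entry Toronto Dealer 1"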


Figure 68. OS/2 DBM: Catalog Database Window

   Figure 69 on page 96 shows you all the entries we used to define the databases that the DataHub server could access through the DRDA protocol (DDCS/2). To get this list, use the following command from the OS/2 command line:

   DBM LIST DCS DIRECTORY > C:\DBMDCS.LST



Database Connection Services (DCS) Directory

Number of entries in the directory = 3

DCS 1 entry:
 Local database name         = DCSMNYC
 Target database name        = DB2CENTDIST
 Application requestor name  =
 DCS parameters              = ,,C:\SQLLIB\DCS1DSN.MAP
 Comment                     = DCS entry New York Central
 Comment code page           = 850
 DCS directory release level = 0x0100

DCS 2 entry:
 Local database name         = DCSVTOR1
 Target database name        = SQLLDLR01
 Application requestor name  =
 DCS parameters              = ,SQLVMA,C:\SQLLIB\DCS1ARI.MAP
 Comment                     = DCS entry Toronto Dealer 1
 Comment code page           = 850
 DCS directory release level = 0x0100

DCS 3 entry:
 Local database name         = DCSASJD1
 Target database name        = SJ400DLR1
 Application requestor name  =
 DCS parameters              = ,,C:\SQLLIB\DCS1QSQ.MAP
 Comment                     = DCS entry San Jose dealer 1
 Comment code page           = 850
 DCS directory release level = 0x0100

Figure 69. OS/2 DBM: DCS Directory List

9. Click on Catalog to catalog the new DCS entry.
   The Database Connection Services Directory window will appear with the new entry on it.
10. Now that all the catalog entries have been defined, you can close the Directory Tool window. If you plan to test the connection, you should keep it open in case of a problem.

OS/2 Database Manager Workstation Name

If you are using NETBIOS to connect to other OS/2 Database Manager servers, you must know the workstation names that have been assigned to them during installation. As you are now well aware, those names must be unique in the network and they should conform to a certain nomenclature.

The workstation name can be changed any time after installation, but do not change it once clients are accessing it. To get the actual definition, the following tools can be used:

• Configuration Tool from the OS/2 Database Manager application folder
• Command line processor.

Figure 70 is a listing of the output of the following OS/2 Database Manager command:

DBM GET DATABASE MANAGER CONFIGURATION > C:\DBMCONF.LST

Database Manager Configuration

 Max requester I/O block size (RQRIOBLK)  = 4096
 Max server I/O block size (SVRIOBLK)     = 4096
 Max kernel segments (SQLENSEG)           = 80
 Max no. of active databases (NUMDB)      = 8
 Workstation name (NNAME)                 = DHSERVER
 Communication heap size (COMHEAPSZ)      = 16
 Remote services heap size (RSHEAPSZ)     = 16
 Max remote connections (NUMRC)           = 10
 Workstation type                         = Server
 Database Manager release level           = 0x0300

Figure 70. OS/2 DBM: Listing of DataHub/2 Database Manager Server Configuration

You can use the same tools to modify the workstation name. This is the command you would use to modify the workstation name with the OS/2 Database Manager command line processor:

DBM UPDATE DATABASE MANAGER CONFIGURATION USING NNAME <new workstation name>

You will need to stop OS/2 Database Manager before the change takes effect.
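For example, the following sequence renames the server and recycles OS/2 Database Manager; DHSERVER2 is a hypothetical new name, and STOPDBM/STARTDBM are the command names we would expect at this level of OS/2 Database Manager (verify them for your installation):

DBM UPDATE DATABASE MANAGER CONFIGURATION USING NNAME DHSERVER2
STOPDBM
STARTDBM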

3.7.6 Binding Applications

Application programs can be executed only after they are bound to a target database. Before you start the DataHub/2 definitions, we recommend that you verify the DRDA and RDS connections. The interactive tool in the OS/2 environment is the Command Line Processor. This utility can be bound to DB2/MVS, SQL/DS, AS/400, and other OS/2 databases.

Binding to DRDA Hosts

To bind the OS/2 Database Manager utilities to DRDA hosts, execute the following DDCS/2 command:

SQLJBIND <database alias>

It is easier to execute the command from the SQLLIB directory; the command would be SQLJBIND MNYCENT, for example.

Binding to OS/2 Database Manager Hosts

To bind the OS/2 Database Manager utilities to different versions of OS/2 Database Manager hosts, you will need to execute the following command from the SQLLIB directory:

SQLBIND @SQLUBIND.LST /K=ALL <database alias>



3.7.7 Testing DRDA and RDS Connections

This section provides you with a list of commands you can execute to verify the DRDA and RDS connections to a target host. The procedures documented in Table 12 are valid from an OS/2 requester command line. The requester could be a DataHub/2 workstation. Prerequisites are:

• DRDA and RDS connectivity established
• OS/2 Database Manager utilities bound to the target database.

The procedures in Table 12 use the database alias names as found in Figure 66 on page 92.

Table 12. Sample Verification Procedures. These procedures are based on an OS/2 Database Manager system accessing the different hosts using the Command Line Processor.

 Target Host                        Procedure
 New York Central (DB2)             DBM START USING DATABASE MNYCENT
                                    DBM SELECT * FROM SYSIBM.SYSTABLES
                                    DBM STOP USING DATABASE
 Toronto Car Dealer 1 (SQL/DS)      DBM START USING DATABASE VTORDB
                                    DBM SELECT * FROM SYSTEM.SYSCATALOG
                                    DBM STOP USING DATABASE
 San Jose Car Dealer 1 (AS/400)     DBM START USING DATABASE ASJDB
                                    DBM SELECT * FROM COLLECTION.SYSTABLES
                                    DBM STOP USING DATABASE
                                    Note: On the AS/400 there is a table catalog for each
                                    collection. Also, you need to create a NULLID collection
                                    to execute the above statements.
 Rio de Janeiro Car Dealer (OS/2)   DBM START USING DATABASE ORIODB
                                    DBM SELECT * FROM SYSIBM.SYSTABLES
                                    DBM STOP USING DATABASE
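If you have several hosts to verify, a small REXX sketch such as the following can exercise each connection in turn. It only tests that the connection can be established (it omits the per-host SELECT statements, whose catalog table names differ by platform); the alias list comes from Figure 66 on page 92:

/* Sketch: verify that each cataloged database can be reached */
aliases = 'MNYCENT VTORDB ASJDB ORIODB'
DO i = 1 TO WORDS(aliases)
  db = WORD(aliases, i)
  SAY 'Testing connection to' db
  CALL DBM 'START USING DATABASE' db
  CALL DBM 'STOP USING DATABASE'
END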

3.7.8 Managing OS/2 Database Manager Directories

This section discusses the different procedures that can help you back up, restore, and copy the different OS/2 Database Manager directories used in a DRDA and DataHub environment.

But first, good management practice calls for documenting the environment. Figure 71 on page 99 shows a sample REXX procedure that can be used to print all the different OS/2 Database Manager configuration and directory entries on a PS/2 printer assigned to the address LPT1. You could change the procedure to route the output to a file.


/* REXX command to print directory entries and DBM configuration */
'CLS';
SAY 'Starting Printing of DBM directory and configurations';
SAY;
'CALL DBM LIST NODE DIRECTORY > LPT1';
'CALL DBM LIST DATABASE DIRECTORY > LPT1';
'CALL DBM LIST DCS DIRECTORY > LPT1';
'CALL DBM GET DATABASE MANAGER CONFIGURATION > LPT1';
'CALL DBM GET DATABASE CONFIGURATION FOR EMQDB > LPT1';
SAY;
SAY 'Lists printed';

Figure 71. OS/2 DBM: REXX Procedure to Print OS/2 Database Manager Definitions

Backup and Restore

Because of the number of entries that could be required in the OS/2 Database Manager directories (a network of hundreds of OS/2 Database Manager databases), we recommend that you back up those directories on all DataHub/2 workstations.

The OS/2 Database Manager recovery tool does not perform any backup or recovery of the DBM directories. Figure 72 on page 100 and Figure 73 on page 101 provide you with sample OS/2 REXX procedures to back up and recover the OS/2 Database Manager directories. Before using the procedures, make sure you adapt them to your environment.


/* Procedure to back up the DBM directories */
sqllib  = 'C:\SQLLIB';
backvol = 'A';
'CLS';
SAY;
SAY 'Make sure you do have a diskette in drive' backvol;
'PAUSE';
SAY;
SAY 'Creating the sub-directories on drive' backvol':';
'MKDIR' backvol':\SQLNODIR';
'MKDIR' backvol':\SQLGWDIR';
'MKDIR' backvol':\SQLDBDIR';
'MKDIR' backvol':\SQLDCDIR';
'MKDIR' backvol':\SQLDDDIR';
SAY;
SAY 'Backup of the DBM directories';
SAY;
SAY 'Backup of the Workstation directory';
'COPY' sqllib'\SQLNODIR\*.*' backvol':\SQLNODIR';
SAY;
SAY 'Backup of the System directory';
'COPY' sqllib'\SQLDBDIR\*.*' backvol':\SQLDBDIR';
SAY;
SAY 'Backup of the DCS directory';
'COPY' sqllib'\SQLGWDIR\*.*' backvol':\SQLGWDIR';
SAY;
SAY 'Backup of the Volume C: directory';
'COPY C:\SQLDBDIR\*.*' backvol':\SQLDCDIR';
SAY;
SAY 'Backup of the Volume D: directory';
'COPY D:\SQLDBDIR\*.*' backvol':\SQLDDDIR';
SAY;
SAY 'Backup terminated on volume' backvol;

Figure 72. OS/2 DBM: REXX Procedure to Back Up the OS/2 Database Manager Directories


/* Procedure to recover the DBM directories */
sqllib  = 'C:\SQLLIB';
backvol = 'A';
'CLS';
SAY;
SAY 'Make sure you do have a diskette in drive' backvol;
'PAUSE';
SAY;
SAY 'Recovery of the DBM directories';
SAY;
SAY 'Recovery of the Workstation directory';
'COPY' backvol':\SQLNODIR\*.*' sqllib'\SQLNODIR';
SAY;
SAY 'Recovery of the System directory';
'COPY' backvol':\SQLDBDIR\*.*' sqllib'\SQLDBDIR';
SAY;
SAY 'Recovery of the DCS directory';
'COPY' backvol':\SQLGWDIR\*.*' sqllib'\SQLGWDIR';
SAY;
SAY 'Recovery of the Volume C: directory';
'COPY' backvol':\SQLDCDIR\*.* C:\SQLDBDIR';
SAY;
SAY 'Recovery of the Volume D: directory';
'COPY' backvol':\SQLDDDIR\*.* D:\SQLDBDIR';
SAY;
SAY 'Recovery terminated from volume' backvol;

Figure 73. OS/2 DBM: REXX Procedure to Recover the OS/2 Database Manager Directories

Replicating the OS/2 Database Manager Database Directories

This section discusses the requirement of replicating the OS/2 Database Manager directory entries from one DataHub/2 workstation to another. Replication is required only when you want to add another DataHub/2 workstation. It could be used for either scenario 3 (DataHub database server) or scenario 2 when you are adding a new stand-alone workstation.

One way to replicate data is to use the backup and recovery procedures provided in the previous section. Before using the procedures, however, you must ensure that the two workstations have the same set of local databases and volume directories. The target system would need to recatalog the local database if it does not exist in the source system.

Another way to replicate directory entries is to use a REXX procedure with OS/2 Database Manager CATALOG commands (see Figure 61 on page 87). To replicate, you would need to execute the procedure on the target system.
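As a sketch of such a procedure for one entry, you could first remove a stale entry and then recatalog it. The names come from Figure 61 on page 87; the UNCATALOG syntax is our assumption, mirrored from the CATALOG form, so verify it against your DBM documentation:

/* Sketch: refresh one directory entry on the target workstation */
/* UNCATALOG syntax assumed; verify before use                   */
CALL DBM 'UNCATALOG DATABASE OPARDB';
CALL DBM 'UNCATALOG NODE WPARDLR1';
CALL DBM 'CATALOG NetBios NODE WPARDLR1 REMOTE DHSERVER ADAPTER 0 WITH "OS/2 Paris car dealer"';
CALL DBM 'CATALOG DATABASE OPARDB AS OPARDB AT NODE WPARDLR1 WITH "OS/2 Paris car dealer"';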



3.8 Adding a New DRDA Host

This section covers in general the steps required to add another RDBMS to an existing DRDA network.

A new DRDA host means different things on different platforms. In the case of DB2, it means adding a new set of address spaces. For SQL/DS, it means adding a new SQL/DS server machine. For the AS/400, there is only one RDBMS per machine.

The steps required differ for platforms that can act as a DRDA client/server and for those that can act only as a DRDA client. If a platform can act only as a DRDA client, it only needs to know the DRDA servers. When a platform can act as both a DRDA client and server, it needs to define itself (that is, define the RDB name) and the other DRDA servers. In addition, the platform needs to be known by the other DRDA partners (that is, clients and servers).

3.8.1 DRDA Client

The OS/2 Database Manager and the newly announced DB2/6000 fall within the DRDA-client-only category. The steps listed below are for the OS/2 platform, but the ideas are applicable to the DB2/6000 platform as well.

The steps required to add a new OS/2 host are:

1. Install OS/2 Communications Manager and OS/2 Database Manager.
2. Customize OS/2 Communications Manager:
   • Define the local node characteristics
   • Define the standard mode name for DRDA and RDS connectivity
3. Customize OS/2 Database Manager:
   • Define the workstation name for NETBIOS connectivity
4. Define the remote RDBMS to which this system will connect:
   • Customize OS/2 Communications Manager:
     − Define the link (token-ring, SDLC, ...)
     − Define the local control point (Network ID.Logical Unit)
     − Define the partner LU (Network ID.Logical Unit)
   • Customize OS/2 Database Manager:
     − Define the workstation alias (NETBIOS or APPN)
     − Define the system database alias
     − Define the DCS directory (RDB name) if the partner is a DRDA server
   • Test the connection to the partners from the new OS/2 host; that is, CONNECT TO <database alias> (DB2/2) or START USING DATABASE <database alias> (Database Manager component of Extended Services for OS/2).
5. If this new OS/2 host is a new version of OS/2 Database Manager in the network, you may want to bind the new OS/2 Database Manager utilities to the different partners:


   • If the partner is an OS/2 host, issue the following command:
     SQLBIND @SQLUBIND.LST /K=ALL
   • If the partner is a DRDA server, issue the following command:
     SQLJBIND <database alias>
   • Test the bind operation by executing a simple SQL statement on each partner.
6. If this new OS/2 host is accessed by another OS/2 system (for example, a DataHub/2 workstation) with a different version of OS/2 Database Manager installed:
   • You may want to bind the OS/2 Database Manager utilities to the new OS/2 host. Issue the following command from the other OS/2 hosts:
     SQLBIND @SQLUBIND.LST /K=ALL
   • Test the bind operation by executing a simple SQL statement on this new RDBMS.

3.8.2 DRDA Client/Server

Adding a new DRDA client/server to a network is a little more complex than adding a new DRDA client. This new host could be a client to many other DRDA hosts and a server to many DRDA clients, including OS/2 and DB2/6000 hosts.

The steps required to add a new DRDA client/server host are:

1. Install and customize the host RDBMS:
   • Define the RDB name
   • Define the local LU (for example, the VTAM application ID)
2. Define the new RDBMS host to all other RDBMS hosts (if required):
   • Define the new partner LU (Network ID.Logical Unit)
   • Define the new RDB name in a local table (for example, the OS/2 Database Manager workstation, system, and DCS directories)
   • Test the connection to the new RDBMS (for example, CONNECT TO <database alias> (DB2/2) or START USING DATABASE <database alias> (Database Manager component of Extended Services for OS/2))
   • You may need to bind the different versions of utilities used in the actual network to the new RDBMS. For example, you may need to bind different versions of the OS/2 Database Manager utilities.
3. Define the RDBMS hosts to which this new RDBMS host will connect:
   • Define the partner LUs (Network ID.Logical Unit) (for example, in DB2 SYSIBM.SYSLUNAMES)
   • Define the RDB names in a local table (for example, in DB2 SYSIBM.SYSLOCATIONS)
   • Test the connection to the RDBMS (for example, CONNECT TO <RDB name> (DB2))
   • If this new RDBMS is a new version of an RDBMS in the DRDA network, you may need to bind the new version to the partner RDBMS.



3.9 Recommendations

Our recommendations concerning DRDA connectivity are as follows:

• Implement DataHub products only after your DRDA network is running successfully.
• Define a nomenclature for:
  − RDB name
  − OS/2 Database Manager workstation name (for NETBIOS connection)
  − OS/2 Database Manager workstation alias name
  − OS/2 Database Manager database alias name
  − OS/2 Database Manager DCS database alias name
  − OS/2 Communications Manager node alias name
  − Logical unit name (if not done already).
• Implement the DataHub/2 workstation as a DRDA participant before installing DataHub/2. This requires the installation of OS/2 Database Manager, DDCS/2, and the APPC support of OS/2 Communications Manager.
• Perform regular backups of the OS/2 Communications Manager configuration files.
• Perform regular backups of the OS/2 Database Manager directories.
• Put a copy command in the STARTUP.CMD file to copy the C:\MUGLIB\ACCOUNTS\NET.ACC file to another directory for backup purposes, as shown in the example below.
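For example, a line such as the following could be added to STARTUP.CMD; the C:\BACKUP target directory is an assumption, so substitute your own backup location:

COPY C:\MUGLIB\ACCOUNTS\NET.ACC C:\BACKUP\NET.ACC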


Chapter 4. DataHub/2 Workstation

This chapter covers the installation and configuration of the DataHub/2 workstation and its prerequisites for:

• Scenario 1: Local Management of Local Data
  Scenario 1 has the least number of prerequisites and no installation dependencies on other machines. We show the required products and the installation sequence.
• Scenario 2: Central Management of Distributed Data
  Scenario 2 is similar to scenario 1, with the addition of DDCS/2 for cross-platform DRDA. The installation sequence is not dependent on the other platforms.
• Scenario 3: Distributed Management of Distributed Data
  Installation of DataHub/2 in scenario 3 requires a more complex operating environment. We show:
  − Installation environment preparation
  − DataHub/2 requester installation
  − Connectivity parameters for:
    - LAN
    - OS/2 Database Manager
    - DataHub/2 database.

We also discuss LAN design considerations and explain how your design can affect performance.

Depending on your requirements, prerequisite products can include some or all of the following:

• OS/2 Communications Manager
• OS/2 Database Manager client
• OS/2 Database Manager server
• DDCS/2
• LAN requester
• LAN server.

For each scenario we show:

• The products required for each machine
• The installation procedure
• A process flow between products and machines.

When appropriate, we present some configuration tricks or other items that may not be obvious.



DataHub/2 Prerequisites: Installation Sequence

Install the required components in the following sequence for each machine (you may not need all of these products in your environment):

• Base operating system
• OS/2 Communications Manager*
• LAPS*
• OS/2 Database Manager
• LAN server or requester
• CSDs or ServicePak.

*LAPS and OS/2 Communications Manager sequence. The latest version of LAPS is packaged with NTS/2, a separate product. The NTS/2 product is included in LAN Server Versions 2 and 3. OS/2 Communications Manager in Extended Services for OS/2 provides a version of LAPS. Install LAPS from NTS/2 after OS/2 Communications Manager if you use Extended Services for OS/2. This ensures that the older version of LAPS is overlaid with the newer version. Communications Manager/2 does not contain any LAPS function. Install LAPS from NTS/2 before Communications Manager/2. You can now install the latest or your current version of LAN server and, finally, the correct CSDs or ServicePak for your environment.

4.1 Component Configuration

We now show how the OS/2 products link to the other OS/2 prerequisites and how to define the links within each product. We end up with a stack of products, with the prerequisite of a layer below the layer requiring it.

The top of the stack is the DataHub/2 application. The bottom is the physical network. See Figure 74 on page 107. We omit the LAN server and requester from the diagram because they only provide redirection for file access and do not require direct links to the other products.

Figure 74. Major Connectivity Layers

We discuss below the parameters that link components and show how the entry against a parameter at one level relates to the name of a parameter at another level. Eventually we can create a network address.

What Is an Alias?

We use the term alias several times in the next few paragraphs. An alias is the label for a list of definitions for local or external components. An alias is only known within the local OS/2 system.

User Profile Management

Before a user even gets to DataHub/2, User Profile Management (UPM) verifies the originator's access to system resources. The local user ID is used to refer to a set of user profiles on the DataHub/2 workstation. A profile is not necessary but is used to facilitate logon to managed hosts. Refer to 7.1.1, "User Profile Management" on page 241 for a detailed explanation of security. The password associated with the local UPM user ID is used to encrypt the passwords stored in the DataHub/2 workstation profile. A local ID can have several profiles, so a user can select different sets of management tasks for each entry into DataHub/2.

The DataHub/2 Connect Request

DataHub/2 has a list of names representing the physical location of each managed host. These names are aliases and are tied to a symbolic destination. Symbolic destination definition is expanded in OS/2 Communications Manager. Within a host alias, DataHub/2 lists the RDBs. The database name entry includes a reference to the database alias name as it is cataloged in OS/2 Database Manager. The DataHub/2 definitions are stored in an OS/2 Database Manager database owned by DataHub/2. So we have two links external to DataHub/2: the symbolic destination and the database alias name.

OS/2 Database Manager Function

DataHub/2 uses a database to store definitions of hosts, RDBs, and their symbolic destinations. Once the connection is made, DataHub/2 can extract additional definitions from the RDB itself. The OS/2 Database Manager directories have more work to do than DataHub/2 in creating the logical connection to the remote database. OS/2 Database Manager uses the system database and workstation directories to do this. If the RDB is not local, OS/2 Database Manager can use its own RDS component to connect to other OS/2 Database Managers, or DDCS/2 to connect to other platforms.

System Database Directory: Here you assign the alias by which you want to refer to the database in OS/2 Database Manager. A local application accessing the database connects to this alias. The system database directory also contains the actual name of the database as it is known by OS/2 and, optionally, a workstation alias.

Workstation Directory: This term can be confusing because it does not define a physical terminal but rather the name of a link from the database manager to a communications connection. A database in the system database directory can have only one workstation alias associated with it, but a workstation, being a connection to another database manager, can be used to connect to several databases on one database manager. The workstation directory has as many entries as there are separate remote connections. If a database connection is provided from a local application to another OS/2 database system, the connection is made through the RDS component of OS/2 Database Manager. The protocols used are:

• RDS
  − NETBIOS or LU 6.2 protocols
• Cross-platform connections
  − LU 6.2 only.
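To make these two directories concrete, the chain for the Paris RDS connection used in Chapter 3 looks like this (the CATALOG commands repeat definitions already shown in Figure 61, with the comment clauses omitted):

DBM CATALOG NetBios NODE WPARDLR1 REMOTE DHSERVER ADAPTER 0
DBM CATALOG DATABASE OPARDB AS OPARDB AT NODE WPARDLR1
DBM START USING DATABASE OPARDB

The workstation directory entry (WPARDLR1) names the communications connection; the system database directory entry (OPARDB) names the database and points at that workstation alias; applications then connect through the alias alone.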


DDCS/2 Connectivity Function

Definitions within DDCS/2 are straightforward. DDCS/2's primary job is to provide the DRDA function for the OS/2 Database Manager and cross-platform consistency.

If the database to which you are trying to connect is on a different platform, enter the name of the local database as defined in the local system database directory and the name of the target database as defined at the remote site. DDCS/2 is not used to access databases on other OS/2 platforms; OS/2 uses RDS for like platforms.
platforms.<br />

OS/2 Communications Manager Function

OS/2 Communications Manager provides the logical connection between applications and the adapters connected to the physical network. This includes configuration of the application description and destination. All the data from the configuration of the different components is funnelled through OS/2 Communications Manager and passes to the network adapter. When combined with the protocol selection and signal generation provided by LAPS, the network delivers a string of bits to the designated receiver.

LAPS Function

The NETBIOS definition requires you to enter the remote server's node name and is supported only within the LAN environment. APPC connections are references within the workstation directory to entries in the local OS/2 Communications Manager. If your environment uses SDLC or X.25 as a physical network, you must replace the LAPS component of our samples with the appropriate configuration and drivers. The majority of the definitions in all the layers above do not change. Any changes will be driven by compatibility requirements with the receiving application's network.

Application Server

The reverse process occurs at the receiver. When the receiver recognizes its network address in the string of bits in the network adapter, it passes the message up through a parallel set of layers. Requests are directed through the communications protocols and are presented to the application servers. We have now connected to the remote application.


4.2 Scenario 1: Local Management of Local Data

4.2.1 Required Products

You have an OS/2 database and would like to run a pilot using DataHub/2. You can use DataHub/2 to copy tables from the managed host, perform utilities, and manage authorizations. The DataHub/2 workstation, the managed host, and the target of copy functions are on different machines. See Figure 75.

Figure 75. Scenario 1: Local Management of Local Data

The minimum required levels and versions of products are listed below. Equivalent products can substitute: for instance, where the list shows OS/2 Database Manager, DB2/2 could be installed instead.

DataHub/2 Workstation

• OS/2 version 2.0
• OS/2 Communications Manager and OS/2 Database Manager client
• LAPS
• DataHub/2

Managed Host (Source Data)

• OS/2 version 2.0
• OS/2 Communications Manager and OS/2 Database Manager server
• LAPS

Target

• OS/2 version 2.0
• OS/2 Communications Manager and OS/2 Database Manager server and client
• LAPS
• DataHub Support/2

4.2.2 Installation Procedure and Configuration

This is the simplest DataHub scenario to install. The products install on each machine with no dependency on the installation status of the other machines. Obviously all installations must be complete before DataHub executes.

4.2.3 Process Flow

The DataHub/2 programs, files, messages, and designated database (DDB) are local to the DataHub/2 workstation. We can use all of the DataHub/2 tools features to manage the host. We manage the host by using it as a source only: we cannot use the host as a target for any copy function because DataHub Support/2 is not installed on it. DataHub Support/2 is installed on the target machine. DataHub/2 connects to DataHub Support/2 using LU 6.2 tools conversations. DataHub Support/2 accesses the local OS/2 Database Manager, which then uses RDS to connect to the OS/2 Database Manager on the managed host.

The DataHub/2 workstation must be at least an OS/2 Database Manager client. We do not need it to be a database server in this scenario. DataHub/2 is installed as a server for stand-alone operation. We install both the platform and tools features, with access to the DataHub/2 database.

The target machine must have OS/2 Database Manager installed as both a server, to talk to the DataHub/2 workstation, and a client, to talk to the managed host.

The managed host must be an OS/2 Database Manager server to receive RDS requests from the target and the DataHub/2 workstation. We do not need client function there.

See 1.2, “DataHub Data Flows: Overview” on page 6 for a more detailed presentation of the flows for each DataHub function.

Let's assume we want to use DataHub/2 to copy the sample database tables from the host to the target. The database directory entries for each machine are shown in Figure 76 on page 112. This is the flow:

• DataHub/2 reads the catalog data from the host and the target to verify that the objects exist at the source and do not exist at the target. RDS is used to connect to both host and target machines.
• DataHub/2 then generates the DDL for these tables and, if the objects do not already exist, creates the objects on the target using RDS.
• DataHub/2 on the DataHub/2 workstation sends the copy request to DataHub Support/2 on the target. This is a tools conversation flow, and therefore the protocol used is LU 6.2, configured for the symbolic destination of the target database.
• DataHub Support/2 then sends an SQL select statement to the local OS/2 Database Manager, which uses RDS to connect to the host. This is not a DataHub-specific flow, just standard RDS to the OS/2 Database Manager on the host.
• OS/2 Database Manager on the host runs the select statement and sends the result back to the target using RDS.
• DataHub Support/2 receives the result of the select statement and inserts the rows into the local table. A completion code is returned to DataHub/2 at the DataHub/2 workstation.

Figure 76. Scenario 1: Database Catalogs and RDB Name Map File

DataHub/2 Workstation

This DataHub/2 workstation configuration shows the simplicity of cataloging the databases and configuring DataHub/2.

OS/2 Database Manager: There is no difference in the cataloging of the remote databases just because they are used by DataHub/2. The local aliases relate to workstation names that link to the remote machines using RDS. Both remote database managers must be installed as servers.

DataHub/2: The configuration lists two RDBs: DBMMTLDLR1 and DBMRIODLR1. The database names are added to the definitions within the RDB and relate to local database aliases in the OS/2 Database Manager catalog.

Target

You configure the local OS/2 Database Manager and DataHub Support/2 on this machine.

OS/2 Database Manager: Standard catalog procedure applies. One database is local, and one is remote on the managed host.

DataHub Support/2: The only item we discuss in this installation is the DataHub Support/2 map file.

There is no configuration pull-down menu for DataHub Support/2. There is also no such thing as an RDB in OS/2 Database Manager as there is on the other database managers. OS/2 uses an alias as the application's label for the database. If the alias name of the DataHub Support/2 host's database matches the RDB name of the DataHub/2 workstation, a map file is not necessary. If the names are different, a map file is required, but only on the DataHub Support/2 machine. The map file relates an RDB name on the DataHub/2 workstation to an OS/2 Database Manager alias on the DataHub Support/2 machine. This ensures compatibility among the database managers, DataHub/2, DataHub Support/2, and DataHub Support on all platforms.

DataHub Support/2 RDB Name Map File

The map file is a flat ASCII file with one entry for each RDB and local alias pair.

• Path to the map file
  − The OS/2 CONFIG.SYS file points to the DataHub Support/2 configuration by using the system variable EMQ2SYS.
  − The file named in EMQ2SYS has DataHub keyword records, one being RDBNAMEMAP.
  − RDBNAMEMAP lists the filename of the map.
• Map file contents
  − Column 1 of the map file contains the RDB name (18 characters maximum).
  − Column 20 contains the local alias name (8 characters maximum).

We now have a link from the DataHub/2 RDB name to a local alias at DataHub Support/2. The local alias defines an entry in the workstation directory that links to the remote database, Sample, on the managed host.
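As a sketch of how these pieces fit together, assume the shared directory is C:\EMQDIR; the file names and the KEYWORD=value record syntax below are our illustration, while EMQ2SYS, the RDBNAMEMAP keyword, and the column positions are fixed by the rules above. Here the DataHub/2 RDB name DBMMTLDLR1 maps to the local alias SAMPLE:

   In CONFIG.SYS:
      SET EMQ2SYS=C:\EMQDIR\EMQ2SYS.CFG

   In C:\EMQDIR\EMQ2SYS.CFG:
      RDBNAMEMAP=C:\EMQDIR\RDBNAME.MAP

   In C:\EMQDIR\RDBNAME.MAP (in the real file the RDB name starts in
   column 1 and the alias in column 20):
      DBMMTLDLR1         SAMPLE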

Managed Host

DataHub/2 has no impact on this machine's configuration. It is set up as an OS/2 Database Manager server.


4.3 Scenario 2: Central Management of Distributed Data

4.3.1 Required Products

Scenario 2 is similar to scenario 1, with the addition of hosts other than OS/2. See Figure 77. The OS/2 target machine shown in scenario 1 is removed because the AS/400, MVS, and VM hosts participate as both source and target machines for OS/2 and each other. In other words, the target machine of scenario 1 can act as both a source and a target for the non-OS/2 hosts. We must install DDCS/2 to provide DRDA connectivity to hosts on non-OS/2 systems. The multiuser version of DDCS/2 requires OS/2 Database Manager to be installed as a server. Single-user DDCS/2 can operate using either the stand-alone or the server version of OS/2 Database Manager.

Figure 77. Scenario 2: Central Management of Distributed Data

Our discussion is limited to the installation of the OS/2 components, not the OS/400, MVS, or VM components required for this scenario.

DataHub/2 Workstation

• OS/2 version 2.0
• OS/2 Communications Manager and OS/2 Database Manager client
• LAPS
• DDCS/2 single-user version
• DataHub/2

Managed OS/2 Host

• OS/2 version 2.0
• OS/2 Communications Manager and OS/2 Database Manager server and client
• DDCS/2, if the host is the source or target of a cross-platform function
• LAPS
• DataHub Support/2

4.3.2 Installation Procedure and Configuration

The OS/2 component installation is the same as in scenario 1, with the addition of DDCS/2.

4.3.3 Process Flow

OS/2 Database Manager catalogs are similar to those in scenario 1. A map file is still required if the RDB and alias names on the DataHub Support/2 host are different. The change in this scenario is the addition of cross-platform DRDA. RDS took care of access to remote databases in scenario 1; we add DDCS/2 to one of the OS/2 systems to provide the cross-platform connectivity. DDCS/2 provides the DRDA flow plus the Database Connection Services (DCS) directory to relate the local database alias to a remote non-OS/2 RDB name. Figure 78 shows our DCS directory for the three non-OS/2 hosts.

Figure 78. Database Connection Services Directory
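For illustration only (the database names here are hypothetical, and the command form follows the DB2/2 and DDCS/2 command line of this era, so verify the syntax against your level's reference), a DCS directory entry relates a database cataloged locally as DB2MVS to the RDB known at the remote site as WASHDB2:

   DBM CATALOG DCS DATABASE DB2MVS AS WASHDB2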

We use the same function, copy table, as in scenario 1, except that the source and target machines are on different platforms:

• DataHub/2 at the DataHub/2 workstation checks the database catalogs at the source and target machines. The flow to the OS/2 target is RDS. DDCS/2 provides the DRDA link to the non-OS/2 host.
• DataHub/2 then issues the command to copy the table. The command is passed to the DataHub Support/2 target host using a tools conversation, LU 6.2.
• DataHub Support/2 receives the request, resolves the target RDB name using the map file if necessary, and passes the select statement to the source host through DDCS/2. In our case DDCS/2 is local on the target machine. It could also be on another machine.
• DDCS/2 establishes the DRDA link to the remote RDBMS using LU 6.2.
• DataHub Support/2 on the target host then receives the data through the DRDA link and inserts the data into the local tables.
• DataHub Support/2 on the target host then forwards a completion code to the requesting DataHub/2.
requesting DataHub/2.


4.4 Scenario 3: Distributed Management of Distributed Data

When you implement DataHub as we do in this scenario (see Figure 79), you increase the number of products required. You also change the configuration of some of the components used in the other scenarios, such as LAN links and connections to the server's databases. We therefore present more sections with more details for this scenario. The sections are:

• Required Products
• DataHub/2 Execution: Data Requirements
• Installation Environment Preparation
• DataHub/2 Workstation Installation as a Requester
• LAN Connectivity Parameters
• OS/2 Database Manager Connectivity Parameters
• DataHub/2 Database Connectivity Parameters.

Figure 79. Scenario 3: Distributed Management of Distributed Data

Normally you install a DataHub/2 workstation with local program code, data, and designated database. But if you need several DataHub/2 workstations, why duplicate all the files, databases, and catalogs? The OS/2 LAN server and OS/2 Database Manager RDS can provide access to the data DataHub/2 needs to operate as a DataHub/2 workstation.

To scenario 2 we add centralized DataHub/2 program code on a LAN server. The server is also a DataHub/2 workstation and provides a central DataHub/2 database. DataHub/2 workstations not on the server are DataHub/2 requesters.


4.4.1 Required Products

Install the products listed below on each machine. Note that you must provide the proper LAN requester, server, and OS/2 Database Manager connections before you begin the DataHub/2 workstation installation.

DataHub/2 Workstations (Requesters)

• OS/2 version 2.0
• OS/2 Communications Manager and OS/2 Database Manager client
• LAPS
• LAN requester
• DataHub/2 requester

Managed OS/2 Host

In scenario 3, the OS/2 managed host also contains the central DataHub/2 database and all the code and files to be shared with the requesters. We can also use the machine as a DataHub/2 workstation.

• OS/2 version 2.0
• OS/2 Communications Manager and OS/2 Database Manager client and server
• DDCS/2
• DataHub/2
• DataHub Support/2
• LAPS
• LAN Server

4.4.2 DataHub/2 Execution: Data Requirements

Let's set the stage by showing how DataHub/2 operates as a requester. DataHub/2 executes using one database and five types of files. The database contains the configuration of the managed hosts, including host name, symbolic destination name, and database alias name.

The files are:

• Data files. Installation history, configuration, and the name of the DataHub/2 database.
• Executable files. Programs for the various functions.
• Dynamic link libraries. Code and resources that can be loaded for execution.
• Help files. Press F1 to see contextual help during configuration or execution.
• Include files. Header and import libraries used by tool builders and the sample tool included with the tools feature.

The database and any of these files can be local or on a server. The combinations may seem endless, but several options are realistic, as described below.
below.


DataHub/2 Database Placement

A central database provides consistent definitions to all DataHub/2 workstations. An administrator who needs unique access may want a local database, or a test and development department may want to be independent of central control. A requester may even have databases in different locations for different instances of DataHub/2 on one machine; this would be two logical DataHub/2 workstations. If your administrators manage the same hosts, you are a good candidate for one central DataHub/2 database. See 4.5, “LAN Design Considerations” on page 144 for more details on database placement.

File Placement

You can use the same decision process for file placement, with the following additional information: data files are updated by each DataHub/2 process. This includes the log file, EMQCELOG.0n0, where n is a number representing the instance of DataHub/2 that created the log. A central log may be more difficult to use for problem determination if there are several DataHub/2 workstations, because there is no userid associated with a log. But a local log may be more difficult for the central administrator to access. You could have the best of both situations by placing the user's local data files on the LAN server. Files would then be associated with the user and accessible by the LAN administrator.

You probably want to store one copy of the DLLs and help files centrally. If you want your administrators to develop their own tools and utilities, you could make these files local to each DataHub/2 workstation.

Now that we know how DataHub/2 accesses the data, we can understand what we must do to make the files and database available before we install DataHub/2.

4.4.3 Installation Environment Preparation

The install process either creates a new DataHub/2 database, uses an existing local DataHub/2 database, or uses an existing cataloged remote database. The first two options simplify the prerequisites for installing a DataHub/2 requester. If you want to use a central database on the server, you must have access to it before you start to install a DataHub/2 requester, because the installation checks whether the database selected is in fact a DataHub/2 database. The same is true for the files, because the install process updates some of the files on the server. For our purposes in this scenario, we assume that the DataHub/2 code installed on the DataHub/2 workstation in scenario 2 becomes our DataHub/2 server. We show the steps you must complete to install a DataHub/2 requester that has data files on the local drive and common DataHub/2 files and the DataHub/2 database at the server.

DataHub/2 Server

LAN Server Administration Simplified: The LAN server provides resource sharing to the LAN requesters. Resources in our scenarios are files or printers. The LAN administrator gives each resource an alias name. An access profile is then defined for each alias, and users are given access to the resource. We highly recommend that you define groups of users and give resource access to groups rather than to individual users. In this way you can just add a user to a previously defined group, and the user has all accesses, with very little effort on the part of the administrator.


LAN Server Access Profile Tip

Versions of the LAN server over the years have changed the defaults for access. Early version defaults gave all users access to everything. Newer version defaults are not as generous: access must be defined, or a user cannot get to a resource. This applies to subdirectories as well. If you organize your server so that DLLs and files are in subdirectories, you must use the APPLY function within the access profile definition menu. APPLY gives users access to data in subdirectory levels below the directory that the access profile defines.

On the LAN server you define aliases, define access profiles, and grant users access, as follows:

1. The LAN server must be running during the requester install.

2. Define an alias for the directory on the server where the DataHub/2 shared files reside. (The EMQDIR directory is the default.) We used the alias DATAHUB and assigned drive H.

3. Define access profiles for the alias (see the sketch after this list). We used access type XR, to give DataHub/2 LAN requesters execute and read access to the central files. The requesters cannot delete the DataHub/2 system files with this restricted access. If the user's log and error files are on the server, access must be RCD (read, create, and delete) for the directory that contains the user's files.

4. Add userids to LAN server UPM for the requesters.

5. Assign logon assignments for the users. We recommend groups if there are several users.
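We defined the alias and access profile through the LAN server full-screen administration panels. As a hypothetical command-line sketch (assuming the LAN Manager-style NET ACCESS command of this era and a previously defined group DHGROUP; verify the syntax for your LAN server level), the access profile of step 3 might look like:

   NET ACCESS C:\EMQDIR /ADD DHGROUP:XR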

OS/2 Database Manager: The DataHub/2 database on the DataHub/2 server must be available to the DataHub/2 requester during the requester installation process. OS/2 Database Manager on the server provides this access through RDS. OS/2 Database Manager must be running as a server, and access to the DataHub/2 database tables must be granted for the new requesters. To grant access, use the sample command file, EMQGRANT.CMD, provided in the executable files directory by the install procedure. To use it, enter EMQGRANT database_name user_type userid_list (an example follows the list), where the parameters are:

• database_name is the name of the DataHub/2 database.
• user_type is one of:
  − USER, a DataHub/2 user with no additional authority
  − ADMIN, a DataHub/2 user with authority to grant access, install tools, and customize functions.
• userid_list is the list of userids to be granted access.
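For example, to give two requester users ordinary DataHub/2 access to the database DHDB used later in this scenario (the userids here are hypothetical):

   EMQGRANT DHDB USER MARIA GORD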

DataHub/2 Requester

You must complete the following steps at the requester before you start to install DataHub/2:

1. Start the LAN requester.
2. Sign on to the domain.
3. Start OS/2 Database Manager.
4. Catalog the remote DataHub/2 database in the:
   • System database directory
   • Workstation directory, using either
     − the NETBIOS server node name, or
     − APPC LU and PLU definitions.
5. Stop and start OS/2 Database Manager to activate the changes.

Figure 80 shows the requester's directories for the central DataHub/2 database stored on the server. The installer on the requester machine must have DBADM authority to use the DataHub/2 database on another workstation. DHDB is the alias and the local database name on the requester. The workstation directory defines the NETBIOS link from the local database to the OS/2 Database Manager node name, DHSERVER, which is the managed host and LAN server.

Figure 80. OS/2 Database Manager Directories

You now have access to the files and DataHub/2 database on the server.
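As a sketch of the cataloging behind Figure 80 (the NETBIOS case; DHDB and DHSERVER are the names from our scenario, but the command form follows the DB2-family command line, so verify it for your OS/2 Database Manager level):

   DBM CATALOG NETBIOS NODE DHSERVER REMOTE DHSERVER ADAPTER 0
   DBM CATALOG DATABASE DHDB AS DHDB AT NODE DHSERVER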

4.4.4 DataHub/2 Workstation Installation as a Requester

To install DataHub/2 as a requester:

1. Log on locally (Logon /l).

2. Insert DataHub/2 diskette 1. Only one diskette, the Platform diskette, is needed for a requester installation. The remainder of the code is already installed on the server.

3. Type A:Install.

Figure 81 on page 122 through Figure 89 on page 127 show the installation screens and how to access files on the server.

Figure 81 on page 122 shows the first screen you see after the installation logo. We store the code on the server, so we use only the platform feature.


Figure 81. DH/2 Install: Feature Select

4. Click on Platform Feature and then the Continue push button.

Figure 82. DH/2 Install: Installation Type

5. Click on Requester Installation and then the Continue push button. The server option is for stand-alone or server installations. The Target Directories screen (Figure 83 on page 123) is displayed.

Figure 83. DH/2 Install: Target Directories

The screen in Figure 83 shows how to define the target directories for local data files with all common files on the server. Note that these files are accessed and verified during the install, so you must have the LAN requester and server active and the requester logged on. In our case we assigned the common files at the server to a user's H: disk. This disk is really the EMQDIR directory on the server's C: disk. We used subdirectories for each file type in our installation, but this is not necessary. (If you use subdirectories, do not forget to APPLY the access profiles in the LAN server to the subdirectories.) We recommend that you leave all files in the EMQDIR directory.

6. Update the file entries for your configuration. Click on the Continue push button to get to the Update Configuration screen (Figure 84 on page 124), where you can change the default for the DataHub/2 environment variable. This variable points to the directory where the DataHub/2 data files are stored.


Figure 84. DH/2 Install: Update Configuration

On the DataHub/2 Database Name screen (Figure 85 on page 125) you enter the local alias for the DataHub/2 database. In our case this points to the server through the OS/2 Database Manager directories. Note that the install procedure accesses the database to ensure that it is valid. Therefore we need to have OS/2 Database Manager running both locally and at the server, with the database cataloged and present on the server.

7. Click on the Continue push button to enter the alias and see the next screen.

Figure 85. DH/2 Install: DataHub/2 Database Name

The remaining screens are straightforward and do not require further explanation.

Figure 86. DH/2 Install: Delimiter Characters for Commands

Figure 87. DH/2 Install: LAN Workstation Identification

Figure 88. DH/2 Install: Add Program to Desktop

Figure 89. DH/2 Install: Installation Confirmation

8. Reboot your requester workstation to activate the CONFIG.SYS file changes.

9. You can now click on the DataHub/2 icon to start configuring the DataHub/2 workstation.

4.4.5 LAN Connectivity Parameters

This section introduces the terminology for the configurable LAN components that are internal to each machine in your network. You can configure most of these components to suit the size and function of your machines. Defaults satisfy most operating conditions. For instance, if a LAN requester attempts to log on to a server, and the server is not active, the requester's network drivers wait a certain amount of time and retry. Only after all the retries fail is the application notified that the server is not responding. You do not normally adjust the wait time or retry count. You should, however, adjust the number of requesters a server can support if your network is larger than the defaults can handle. Servers can theoretically handle over 1000 requesters, but the response time at any requester could be unpredictable.

LAPS defines and configures all NETBIOS and 802.2 resources. LAPS places the definitions in the PROTOCOL.INI file, which is read when the device drivers are loaded.


Tuning versus Execution

The discussion that follows is not presented as an exhaustive reference for LAN tuning purposes. It is meant to introduce you to LAN parameter terminology and to give you a basic understanding of the interrelationships among parameters. Once you have this information, you can determine the demands your environment places on the resources and adjust the parameters to make DataHub/2 work. If you increase the value of a parameter more than necessary, two things can happen:

• You get a configuration error message that tells you to lower the parameter or raise another so it fits.
• Performance may suffer because another resource has been reduced dynamically to make yours fit.

Refer to the Parmtune guide in the IBM marketing tools repository if you want to tune all OS/2 components for maximum performance.

The LAN adapter card provides the physical connection for all local applications to applications running on other machines on the network. If more than one application is communicating, the targets of the conversations may be different; therefore the adapter has to provide links to more than one address. The protocols of the conversations may also be different, and the access or programming interface from each application to the communications software could be different. Figure 90 on page 129 represents the layers required to provide several applications with access to several targets using several protocols. A server machine has to satisfy more requests for connectivity than a single requester.

We show the resources needed for a DataHub/2 workstation and server in scenario 3.

Figure 90 on page 129 shows how an application request from the DataHub/2 workstation flows through the configurable components to a server application. The return flow is the same. In our case the applications can be LAN requester, OS/2 Database Manager, and DataHub/2. Normally the flow is specific to a request, but changes to RDS and NETBIOS in recent releases have improved performance because the protocols allow conversations at lower level interfaces. The flow is therefore dependent on which level of code you run.

DataHub/2 Workstation

LAN Requester: The LAN requester always uses NETBIOS (flow 1). Before the current version of LAPS, NETBIOS used a SAP (F0) to access the network (flow 3). The NETBIOS version in LAPS uses a direct interface to 802.2 (flow 2), so fewer SAPs need to be defined.

(A SAP is a service access point for an application to gain access to a network. Each protocol is given a specific SAP access number. If the connection uses a SAP, the SAP number is appended to the source and destination addresses. Newer versions of some device drivers bypass the SAP layer.)


Figure 90. LAN Configurable Resources

OS/2 Database Manager: The client function of OS/2 Database Manager on our requester uses RDS (flow 6) to access the database on the server. Refer also to Figure 104 on page 143 and Figure 105 on page 143 for the definitions within OS/2 Database Manager for the connection configuration.

RDS: RDS can use either NETBIOS or LU 6.2 to connect to the network. If you select NETBIOS (flow 4), you end up in flow 2 or flow 3, depending on the NETBIOS version. This uses up another NETBIOS name and another session, and it may require another link. We explain each resource account in more detail below. If RDS uses LU 6.2, a private SAP (SAP 84, flow 5) is used up, as is another 802.2 user.

DataHub/2: DataHub/2 uses both LU 6.2 (flow 8) for the tools conversations and the API to OS/2 Database Manager (flow 7) to access the catalogs. Flow 8 uses the SNA SAP 04. Flow 7 ends up connecting through flow 6 to RDS.


NETBIOS: DataHub/2 flat files and executable code can be accessed by the DataHub/2 workstation using the LAN requester, which uses the NETBIOS protocol. A link is established when the requester calls the server using the server's NETBIOS node name. A session is set up when the requester asks for data. We have just used up part of three different configurable resources: one NETBIOS name, one session, and one link station.

802.2: The DataHub/2 connection to the DataHub/2 server is provided by APPC, which uses LU 6.2 sessions. LU 6.2 is part of SNA, which uses SAP 04. Now we have used up more resources: a SAP, an 802.2 user, another link station, and one or more LU 6.2 sessions. The LU 6.2 sessions are configured in OS/2 Communications Manager. NETBIOS is also an 802.2 user and may or may not use a SAP. The NETBIOS link station is provided by 802.2.

We have now connected all the DataHub/2 workstation applications to the network.

SAPs, Users, Link Stations, NCBs, Names, and Sessions

When we define resources, we reserve either main memory or network adapter memory. The adapter has a fixed amount of memory, and therefore the number of resources you can define is limited. For instance, one adapter can support 254 users or sessions. This is a theoretical limit based on addressability and would probably never be attained.

LAN Resource Configuration Roadmap

Table 13 shows the LAN resources, where they are stored, and how to configure or update the parameters.

Table 13. LAN Resources

 Location              Resource                               Update from
 IBMCOM\PROTOCOL.INI   Netbeui_nif: NETBIOS sessions,         LAPS or system editor
                       NCBs, names
 IBMCOM\PROTOCOL.INI   Landd_nif: 802.2 link stations,        LAPS or system editor
                       SAPs, users
 IBMLAN\IBMLAN.INI     Line beginning NET1, last 3 numbers:   System editor only
                       NCBs, sessions, names reserved by
                       requester/server

• 802.2 resources

  − SAPs

    You can see that depending on how you have configured RDS and which level of NETBIOS you use, you can run DataHub/2 with from one to three SAPs defined.

  − Users

    Each SAP becomes an 802.2 user, as does NETBIOS if it does not use a SAP, so if we have three SAPs plus NETBIOS, we need four 802.2 users.


  − Link stations

    We could have several applications connected to either NETBIOS or a SAP, but each connection from a local application to another remote application uses a link station. For instance, a LAN server machine, which also provides 3270 gateway function, needs a link station for each LAN requester, plus another for each downstream 3270 user. These link stations within the server would connect to one NETBIOS name for the server and one SNA SAP. So 20 workstations accessing two applications at the server create a need for 40 link stations at the server, and two SAPs if the server applications use SAPs.

• NETBIOS resources

  − Network control blocks (NCBs)

    The conversation between applications through the network requires several stages to initiate, wait for responses, keep track of addresses, pass requests, receive results, check status, and perform other housekeeping chores. NCBs within NETBIOS handle these low-level jobs. There are more NCBs than there are other NETBIOS resources. OS/2 Database Manager reserves a high number of NCBs, as does the LAN requester or server. Refer to Table 15 on page 132 and Figure 102 on page 142. These reservations, plus any additional requirements for other local applications using NETBIOS, must be covered by the NCBS value in PROTOCOL.INI.

  − Names

    Each reference by an application through NETBIOS to another application requires a NETBIOS name. The local LAN requester needs to withdraw one name from the NETBIOS names account to connect to the LAN server and another name to register the local NETBIOS name. Similarly, if RDS uses NETBIOS to reach out and touch the database server, it uses up another name for the remote node name. In our example above with 20 requesters, the server must be able to allocate 20 NETBIOS names to talk to 20 requesters if they are all active.

  − Sessions

    The logical link between applications is handled by a session. Several sessions can share a link station if the applications on one end of the link communicate with other applications at the same destination.

DataHub/2 Server

All connections from the network to the server applications are the same as for the DataHub/2 workstation. We used different levels of OS/2 and OS/2 Database Manager on our requester and server. This means that the levels of software and device drivers do not have to be the same across your environment.

OS/2 Database Manager Reserved Resources: The server needs more resources to connect to several requesters than a requester needs to connect to one server. Some of these resources are user defined, and some are fixed by the application. Refer to Table 14 on page 132 for minimum configuration values for sessions.


Table 14. NETBIOS Session Information for DB2/2

 DB2/2 Workstation   SQLNETB   DB2/2      Maximum Number of Concurrent
 Type                Default   Overhead   Sessions Using Default SQLNETB
 Server              64        5          59
 Client              16        3          13

The number of sessions reserved for OS/2 Database Manager is defined in the environment variable SQLNETB. You can change this value in the CONFIG.SYS file if your requirements are different from the defaults (an example follows Table 15). Table 15 shows the NCBs and NETBIOS names reserved by DB2/2. The NETBIOS NCB and name values cannot be changed. You must configure sufficient resources in LAPS to satisfy both OS/2 Database Manager and LAN server or requester demands.

Table 15. DB2/2 NETBIOS Resource Requirements

 DB2/2 Workstation Type   Commands (NCBs)   Names
 Server                   46                4
 Client                   24                2
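For example (a sketch; the value chosen is hypothetical), a server that needs more than the default 59 concurrent sessions could raise SQLNETB in CONFIG.SYS and reboot, remembering to configure matching NETBIOS resources in LAPS:

   SET SQLNETB=80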

LAN Requester and Server Reserved Resources: Normally the LAN requester or server program is invoked during startup of a machine. The configuration of requester or server functions is specified in the IBMLAN.INI file. The line beginning Net1 specifies how many LAN resources to reserve. Refer to Figure 102 on page 142. Reservation is made for NCBs, sessions, and NETBIOS names. If the number of parameters defined in LAPS for these resources is only enough for the requester or server function, subsequent requests by applications for additional resources or RDS links will fail.

LAPS Configuration Samples

Figure 91 on page 133 through Figure 102 on page 142 show the screens you use to configure the stations, sessions, names, SAPs, commands, and users that the network needs to provide connections. We show the details of the configuration for both a DataHub/2 workstation and a DataHub/2 server. Our server is configured for seven DataHub/2 workstations. LAPS installs itself in the IBMCOM directory. Figure 91 and Figure 92 on page 133 get you started.


Figure 91. LAPS: Select an Option

Click on Configure to see the next screen.

Figure 92. LAPS: Migrate or Configure

Click on Configure LAN transports and then the Continue push button to see the next screen.


Figure 93. LAPS: Configure Workstation

Figure 93 shows the adapter and protocol selection menus. Scroll down under Network Adapters until you see the adapter installed in the machine you want to configure. Note that you can configure LAPS on one machine and use the configuration on another machine. Click on the adapter and then the Add push button in the Network Adapters dialog box. Then, for the IBM Token-Ring Network Adapter, select the IBM IEEE 802.2 protocol: click on IBM IEEE 802.2 and then the Add push button in the Protocols dialog box. Now you are ready to edit the parameters for 802.2. To edit the parameters for NETBIOS, repeat the process you used for 802.2.
802.2.


Figure 94 shows the 802.2 protocol parameter edit screen for the DataHub/2 workstation. We highlight the stations, SAPs, and users fields. You can use the defaults for a requester. You use up memory if you increase any of the parameters, so you should not use more resources than necessary.

Figure 94. LAPS: Parameters for IBM IEEE 802.2, DataHub/2 Workstation

The IBM marketing tools repository has an OS/2 tuning guide that is available to you from your marketing representative or systems engineer. The guide is called Parmtune and discusses basic and advanced elements of OS/2, OS/2 Database Manager, OS/2 Communications Manager, and LAN server.

The parameters for maximum link stations and maximum number of users for the DataHub/2 workstation are the defaults from installation. We reduced the number of SAPs to show how NETBIOS and RDS do not use a SAP under LAPS. Our DataHub/2 workstation has access to an H: disk on the LAN server, two 3270 host sessions, a command line interface to the server's sample database, and DataHub/2 running. The single SAP is used only for 3270.


For the LAN server machine we increased the link stations, SAPs, and users to allow the server to provide connections from five DataHub/2 workstations to the server's processes, database, and data (see Figure 95). The LAN server program does not provide the connection from the requester to the database on the server, but the LAPS configuration supplies resources for RDS.

Figure 95. LAPS: Parameters for 802.2, LAN Server
Figure 95. LAPS: Parameters for 802.2, LAN Server


In the Configure Workstation window (Figure 96), click on NETBIOS and then the Edit push button to display the parameters for IBM OS/2 NETBIOS.

Figure 96. LAPS: Configure Workstation, DataHub/2 Workstation NETBIOS


Figure 97 and Figure 98 on page 139 show similar screens for the NETBIOS component. We highlight the parameters we are interested in: sessions, commands, and names. Notice the differences between the requester and server parameters.

Figure 97. LAPS: Parameters for NETBIOS, DataHub/2 Workstation
Figure 97. LAPS: Parameters for NETBIOS, DataHub/2 Workstation


Figure 98. LAPS: Parameters for NETBIOS, DataHub/2 Server

In the Exiting LAPS window, click on the Exit push button to exit the LAPS configuration process.

Figure 99. LAPS: Exit

Figure 100 on page 140 shows the results of configuring LAPS. The PROTOCOL.INI file in the IBMCOM directory contains the LAPS parameters. You could edit the parameters directly in this file rather than using the LAPS configuration; changes do not take effect until you stop and restart the local server or requester. Notice the differences between the requester and server parameters.


   [PROT_MAN]
      DRIVERNAME = PROTMAN$

   [IBMLXCFG]
      IBMTOK_nif = IBMTOK.nif
      LANDD_nif = LANDD.NIF
      NETBEUI_nif = NETBEUI.NIF

   [LANDD_nif]
      DriverName = LANDD$
      Bindings = IBMTOK_nif
      NETADDRESS = "T400032047101"
      ETHERAND_TYPE = "I"
      SYSTEM_KEY = 0x0
      OPEN_OPTIONS = 0x2000
      TRACE = 0x0
      LINKS = 41
      MAX_SAPS = 1              ***** See text *****
      MAX_G_SAPS = 0
      USERS = 4
      TI_TICK_G1 = 255
      T1_TICK_G1 = 15
      T2_TICK_G1 = 3
      TI_TICK_G2 = 255
      T1_TICK_G2 = 25
      T2_TICK_G2 = 10
      IPACKETS = 250
      UIPACKETS = 100
      MAXTRANSMITS = 6
      MINTRANSMITS = 2
      TCBS = 64
      GDTS = 30
      ELEMENTS = 800

   [NETBEUI_nif]
      DriverName = netbeui$
      Bindings = IBMTOK_nif
      ETHERAND_TYPE = "I"
      USEADDRREV = "YES"
      OS2TRACEMASK = 0x0
      SESSIONS = 41
      NCBS = 55                 ***** See text *****
      NAMES = 20
      SELECTORS = 5

   The remainder of the file is not shown here.

Figure 100. LAPS: PROTOCOL.INI, DataHub/2 Workstation


   [PROT_MAN]
      DRIVERNAME = PROTMAN$

   [IBMLXCFG]
      IBMTOK_nif = IBMTOK.nif
      LANDD_nif = LANDD.NIF
      NETBEUI_nif = NETBEUI.NIF

   [LANDD_nif]
      DriverName = LANDD$
      Bindings = IBMTOK_nif
      ETHERAND_TYPE = "I"
      SYSTEM_KEY = 0x0
      OPEN_OPTIONS = 0x2000
      TRACE = 0x0
      LINKS = 50
      MAX_SAPS = 6              ***** See text *****
      MAX_G_SAPS = 0
      USERS = 6
      TI_TICK_G1 = 255
      T1_TICK_G1 = 15
      T2_TICK_G1 = 3
      TI_TICK_G2 = 255
      T1_TICK_G2 = 25
      T2_TICK_G2 = 10
      IPACKETS = 250
      UIPACKETS = 100
      MAXTRANSMITS = 6
      MINTRANSMITS = 2
      TCBS = 64
      GDTS = 30
      ELEMENTS = 800

   [NETBEUI_nif]
      DriverName = netbeui$
      Bindings = IBMTOK_nif
      ETHERAND_TYPE = "I"
      USEADDRREV = "YES"
      OS2TRACEMASK = 0x0
      SESSIONS = 60
      NCBS = 95                 ***** See text *****
      NAMES = 40
      SELECTORS = 5

   The remainder of the file is not shown here.

Figure 101. LAPS: PROTOCOL.INI, LAN Server

Figure 102 on page 142 shows the IBMLAN.INI file. We are interested in the three numbers at the end of the Net1 line. This is where the requester and server reserve the connectivity resources defined in the 802.2 and NETBIOS configurations. Notice the difference in these numbers between the OS/2 LAN requester in Figure 102 on page 142 and the OS/2 LAN server in Figure 103 on page 142. Spreadsheets are available on IBM marketing tools to assist you in tuning these parameters: CNFGLS13 tunes servers up to version 3.0; CNFGLS30 applies to version 3.0 specifically. These tools can be given to customers.

   ; OS/2 LAN Requester initialization file

   [networks]
      net1 = NETBEUI$,0,LM10,32,20,14        ***** See text *****

   ; This information is read by the redirector at device initialization time.

   [requester]
      COMPUTERNAME = REQGORD
      DOMAIN = SIGD2471

   ; The following parameters generally do not need to be
   ; changed by the user.
      charcount = 16
      chartime = 250
      charwait = 3600
      keepconn = 600
      keepsearch = 600
      maxcmds = 16

   The remainder of the file is not shown here.

Figure 102. LAPS: IBMLAN.INI, DataHub/2 Workstation

   ; OS/2 LAN Server initialization file

   [networks]
      net1 = NETBEUI$,0,LM10,32,40,21        ***** See text *****

   ; This information is read by the redirector at device initialization time.

   [requester]
      COMPUTERNAME = SIGS2471
      DOMAIN = SIGD2471

   ; The following parameters generally do not need to be
   ; changed by the user.
      charcount = 16
      chartime = 250
      charwait = 3600
      keepconn = 600
      keepsearch = 600
      maxcmds = 16

   The remainder of the file is not shown here.

Figure 103. LAPS: IBMLAN.INI, LAN Server


4.4.6 OS/2 Database Manager Connectivity Parameters

Just as the LAN server is configured to support a number of users, OS/2 Database Manager also has parameters you may have to change to satisfy your requirements. The command to display the configuration is:

   DBM GET DATABASE MANAGER CONFIGURATION

Figure 104 and Figure 105 show the results of the command for both an OS/2 Database Manager client and our server. You can change these parameters, and the parameters in Figure 106 on page 144, by using the configuration tool of OS/2 Database Manager.

   Database Manager Configuration

   Max requester I/O block size (RQRIOBLK) = 4096
   Max server I/O block size (SVRIOBLK) = 0
   Max kernel segments (SQLENSEG) = 25
   Max no. of active databases (NUMDB) = 3
   Workstation name (NNAME) = DHCLIENT
   Communication heap size (COMHEAPSZ) = 4
   Remote services heap size (RSHEAPSZ) = 2

Figure 104. OS/2 Database Manager Connectivity Parameters: Client

   Database Manager Configuration

   Max requester I/O block size (RQRIOBLK) = 4096
   Max server I/O block size (SVRIOBLK) = 4096
   Max kernel segments (SQLENSEG) = 802
   Max no. of active databases (NUMDB) = 8
   Workstation name (NNAME) = DHSERVER
   Communication heap size (COMHEAPSZ) = 16
   Remote services heap size (RSHEAPSZ) = 14

Figure 105. OS/2 Database Manager Connectivity Parameters: Server

Comheapsz and Rsheapsz define the amount of memory to allocate for communication between RDS requesters and the server code. More requesters require more communications buffers to be defined at the server. In our implementation we have a connection from each DataHub/2 workstation. The DataHub/2 program runs as two processes, each of which requires access to the DataHub/2 database. Therefore 14 remote connections will service 7 DataHub/2 requesters. We increased Comheapsz on the requester from the default of 2 to 4 to enable full DataHub/2 function from a client.
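As a sketch (the command form follows the DB2-family command line shown earlier in this chapter; verify the exact syntax for your OS/2 Database Manager level), the server values in Figure 105 might be set with:

   DBM UPDATE DATABASE MANAGER CONFIGURATION USING COMHEAPSZ 16 RSHEAPSZ 14

Stop and restart OS/2 Database Manager to activate the change.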



4.4.7 DataHub/2 Database Connectivity Parameters

Now that OS/2 Database Manager can support our connections, we need access to the DataHub/2 database. See Figure 106.

Figure 106. DataHub/2 Database Active Applications: Server

You must update the maximum active applications (maxappls) database parameter. As with Rsheapsz, remember that the DataHub/2 application runs as two processes, so the maxappls parameter must reflect the number of DataHub/2 workstations multiplied by 2.
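For example (a sketch under the same syntax assumption as above; DHDB is our DataHub/2 database alias), seven DataHub/2 workstations at two processes each need a maxappls value of at least 14:

   DBM UPDATE DATABASE CONFIGURATION FOR DHDB USING MAXAPPLS 14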

4.5 LAN Design Considerations

LAN server compatibility and placement of the LAN server and the DataHub/2 database are the main design considerations.

4.5.1 LAN Server Compatibility

Table 16 shows which LAN Server versions will run on each base operating system. We mixed DataHub/2 on different levels of OS/2 and OS/2 Database Manager with complete interoperability. You should ensure that your server is compatible with the operating system level on the database server machine.

Table 16. LAN Server and OS/2 Version Compatibility

                     OS/2 V 1.3   OS/2 V 2.0
 LS 2.0 Entry        Yes          Yes
 LS 2.0 Advanced     Yes          No
 LS 3.0 Entry        No           Yes
 LS 3.0 Advanced     No           Yes


4.5.2 Placement of LAN Server and DataHub/2 Database

The LAN server program must be installed on the same machine as the DataHub/2 server code because of the way DataHub/2 makes its internal files available to the DataHub/2 requester. The DataHub/2 requester must be a LAN requester so that the DataHub/2 code can access the files.

Each requester can run with a local DataHub/2 designated database or use a central database on the DataHub/2 server. The central database could even be on a different database server not running DataHub/2. This setup would require RDS and network activity for every database access, however, and is not generally recommended.

4.6 Performance

Here we do not discuss how fast DataHub runs. Rather, we discuss two performance factors that can influence your design. One such factor is how you place and combine the various products on the network; another is the workload that each product places on a machine.

4.6.1 Placement

You are probably installing DataHub on an existing LAN or network. The topology of the network is probably the result of satisfying previously defined user requirements, such as access to applications and data.

In a local data environment, performance is not an issue because you are accessing and managing local data. Performance is determined by the amount of data and the speed of the processor and disk drive.

In a distributed environment, such as in our scenarios 2 and 3, your performance will be affected by the network topology. Depending on the additional load imposed by DataHub, you may need to select components to fit the topology or change the topology to fit your requirements.

Let's look at some of the factors that affect capacity and performance in a scenario that is more complex than ours.

Your users are requesters of a LAN server machine, which also provides host connect gateway function. One OS/2 database resides on the gateway. Another resides on a remote non-OS/2 host and is accessed by DDCS/2. Keep in mind that the user's observed response time includes the performance of all system and network components from the user's keyboard through the local processor, LAN, gateway, wide area network (WAN), host processor, and the return trip.

Performance of this network is determined by:

• Number of files or programs transferred between the server and requesters
• How much data is transferred between the local network and the host
• Line speed of the host connection
• Number of database accesses at the server.

Recent experience with customers in similar situations shows that the performance bottleneck of a network as perceived by the user is often not at the server but at the workstation (requester). The network may be a performance consideration if the local LAN is connected to the WAN by a low-speed line. The LAN is almost never a performance problem if properly designed.

4.6.2 Workload

Several benchmarks have been run over the last few years for all components of OS/2. But benchmarks have not been run for every combination of LAN requester, OS/2 Communications Manager, OS/2 Database Manager, and DDCS/2. Now we are adding a DataHub/2 component to the available mix, so your mileage will vary. The performance figures in this section are offered as a guideline only.

The performance of any function varies drastically depending on the machine model, amount of memory installed, type of disk, network adapter, line speed, and user workload. You should use the following numbers to determine whether your scenario is a light, reasonable, or impossible load on the configuration:

• Local processor

  Most benchmarks run on OS/2 show the performance knee at around 60% processor utilization. This means that a CPU running xx functions per minute at 50% utilization will not run twice as many with the same response time. The effect of mixing functions like communications and database calls cannot be predicted by adding the loads linearly.

• OS/2 Communications Manager

  Benchmarks run at the Raleigh ITSC indicate that a high-end gateway machine can serve up to 50 typical users of 3270 sessions on a 56-Kbps line with reasonable (1 to 2 sec) response time. Line speed becomes the limit, and the processor utilization stays linear, below 50%.

• LAN Server

  Server performance is measured by the number of 512-byte sectors the server can read from disk and forward to a requester per minute. The unit is a request per minute, or RPM. If the user is downloading a DataHub/2 program module, divide the size of the program by 512 to determine the number of requests. Determine the number of users and the frequency of access and do the arithmetic to come up with the server load (a worked example follows this list). A general rule of thumb for an average-sized server running at high processor utilization is 15,000 to 20,000 RPM. HPFS and Advanced Server versions will exceed this figure. If your peak load is 5,000 requests per minute, you have used up less than 25% of your server.

• OS/2 Database Manager

  The START USING DATABASE command takes a certain amount of time depending on where the database is, how busy the database server is, whether the user is logged on, and whether the database manager has been started. The START USING DATABASE command is usually issued only during initialization of the application. If your application starts and stops databases, count 15 sec for local databases. Remote starts can take from a few seconds to minutes depending on whether sessions already exist between the requester and the remote database and how many nodes manage the session. Depending on buffer sizes, physical location of data, and type of SQL call, performance can vary from 1,000 to 50,000 SQL calls completed per minute. Your application's performance is very dependent on the database design, which determines the number of calls. In one customer application the response time was reduced from 18 to 2 seconds by duplicating a few fields from one table to another instead of accessing two tables.

  − RDS versus ARI

    An OS/2 Database Manager request using RDS causes all the selected data to be sent from the OS/2 Database Manager server to the RDS requester through the LAN. If the requester executes another select statement, it is sent to the server and the results returned. The application remote interface (ARI) is more efficient than RDS because you can send one request (to run a series of statements) to the server, and only the result of all the statements is returned to the calling application. This process reduces the processor load at the requester but transfers it to the server, which is presumably a faster machine.

• OS/2 Database Manager and LAN Server tuning conflicts

  Disk caching to memory is normally a good idea with LAN Server. But OS/2 Database Manager defeats the benefits of disk caching because the database buffer pools accomplish a similar purpose for database data. When an OS/2 Database Manager disk read is necessary because the data is not in the buffer pool, the read flushes data accessible to the LAN server out of the disk cache. The data is then in both the cache and the buffer pool, but LAN Server has no need for the cached data because it is database data. System Performance Monitor/2 (SPM/2) can be used to determine the application working set of memory. You can then balance buffer pools and swapping, and possibly set up a VDISK for any repetitive-read LAN Server data if there is sufficient real memory.

• DDCS/2

  The performance of DDCS/2 depends on several considerations such as row size, block fetch, block size, and network configuration (requesters to gateway through RDS using NETBIOS or APPC). Generally, performance in a realistic environment varies between 2,000 and 10,000 rows per minute returned from the DB2 or SQL/DS host.
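As a worked example of the LAN Server arithmetic above (the figures are illustrative, not measured): a DataHub/2 program module of 1MB takes 1,048,576 / 512 = 2,048 requests to download. If 20 requesters each download it once within the same 10-minute period, the average load is (20 x 2,048) / 10 = 4,096 RPM, well under the 15,000 to 20,000 RPM rule of thumb, leaving ample capacity for other file activity.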

The above numbers give you a rough starting point from which to determine the load your application imposes on a server. The art of predicting performance depends on how well you can analyze the exact functions called for by your application mix. SPM/2 is a tool that provides information on disk and memory utilization plus the load by application. You can then use the information to determine which factors to analyze and tune for better performance.



4.7 Codepage Support

An OS/2 database is stored in whatever codepage was in effect when the database was created. Access through RDS to a remote database with a different codepage does not work. This is an implementation restriction in the RDS architecture. You can, however, start another OS/2 process, change the codepage using the CHCP nnn command (where nnn is the new codepage), and issue a DBM START USING DATABASE against the database. One of our implementations had a local DataHub/2 database with codepage 437 and another OS/2 DataHub/2 instance running against a remote DataHub/2 database using codepage 850. Note that the two processes must use different EMQDIR environment variables to point to the correct DataHub/2 data file location. We used local data files for each instance. Figure 107 shows the OS/2 command file for the codepage change.

CHCP 850
SET EMQDIR=C:\EMQ850
DATAHUB.EXE
EXIT

Figure 107. CMD File for Codepage Change
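A companion command file can start the second instance. The following is a minimal sketch only, assuming the codepage 437 instance keeps its data files in C:\EMQ437 (a hypothetical directory; Figure 107 showed our 850 instance):

  REM Minimal sketch: start a second DataHub/2 instance at codepage 437.
  REM C:\EMQ437 is hypothetical; each instance needs its own EMQDIR.
  CHCP 437
  SET EMQDIR=C:\EMQ437
  DATAHUB.EXE
  EXIT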

4.8 Recommendations

The challenge of installing DataHub/2 is in getting the prerequisite platform running. In our examples, each scenario builds on the success of its predecessor.

4.8.1 Create the Administrator's Folder

Figure 108 shows an administrator's OS/2 folder we used to run DataHub/2. It provides easy access to the logs and error reports your administrator uses to manage the DataHub/2 environment. Some of the icons are used only for initial installation and configuration, such as LAPS. Table 17 on page 149 lists the definitions we used to set up the Administrator's folder.

Figure 108. A DataHub/2 OS/2 Desktop: Administrator's Folder

Table 17. DataHub/2 Administrator's Folder Definitions

  Icon               Path and File Name        Parameters                  Working Directory
  SNA network        C:\CMLIB\APPN\            [Enter Node Definition      C:\CMLIB
  definition         APPNC2.EXE                (.NDF) file Name]
  LAPS               C:\IBMCOM\LAPS.EXE                                    C:\IBMCOM
  Directory tool     C:\SQLLIB\DIRECT.EXE                                  C:\SQLLIB
  User profile       C:\MUGLIB\UPMACCTS.EXE
  management
  IBM DataHub/2      D:\EMQDIR\DATAHUB.EXE                                 D:\EMQDIR
  EMQCELOG.010       EPM.EXE                   D:\EMQDIR\EMQCELOG.010
  EMQRREPT.TXT       EPM.EXE                   D:\EMQDIR\EMQRREPT.TXT
  OS/2 full screen   *
  DBM command line   C:\OS2\CMD.EXE            /K DBM.CMD -S &             C:\SQLLIB
                                               STARTDBM.EXE

Some of the entries in Table 17 are specific to our environment. You should change the drive and path data to match your environment.

• Verify each component as you install it.

  You may want to test OS/2 Database Manager connectivity using RDS before invoking DataHub/2.

• See if you can START USING the DataHub/2 database at the DataHub/2 database server (see the sketch after this list).

  If you have a LAN Server, make sure the requesters have access to the files. If one or a small number of users functions through the server but an additional user cannot, review the LAN configurable parameters.

• Start with a single DataHub/2 workstation.

  The experience you gain can be transferred to the more detailed configuration required by a central database and files.
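As a quick verification sketch, the following commands, entered from an OS/2 window at the DataHub/2 database server, confirm that the database manager starts and the database can be reached. The alias EMQDH2DB is hypothetical; substitute the alias of your DataHub/2 database from the system directory:

  REM Minimal verification sketch; EMQDH2DB is a hypothetical alias.
  STARTDBM
  DBM START USING DATABASE EMQDH2DB
  DBM STOP USING DATABASE

If these commands succeed locally but fail from a requester, the problem is likely in the RDS or LAN configuration rather than in DataHub/2 itself.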





Chapter 5. DataHub Tools Conversation Connectivity

In this chapter we discuss the implementation of the tools conversation data flows that DataHub uses between DataHub/2 and the DataHub Support host components. Figure 109 summarizes the different data flows that DataHub uses.

The chapter describes the DataHub key connectivity parameters and emphasizes how they should relate to and match each other. The chapter explains the definitions that must be made at the DataHub/2 workstation to add a managed host and a managed RDB in the DataHub/2 database, as well as the definitions that must be made in the OS/2 Communications Manager and the OS/2 Database Manager. The chapter ends with recommendations for a successful DataHub installation.

Figure 109. DataHub Data Flows Used by DataHub Tool Functions


5.1 Key Connectivity Parameters

The DataHub/2 workstation manages multiple, different RDBs. To have the DataHub system working properly, it is very important to understand which parameters are key to connectivity.

OS/2 Communications Manager, VTAM/NCP, and AS/400 communications definitions must have some matching parameters to ensure the successful operation of DataHub. The DataHub key connectivity parameters are:

• Symbolic destination name
• Partner logical unit name
• Partner transaction program name
• Mode name
• Host RDB name
• OS/2 Database Manager database alias name
• Subsystem name, if the RDB is a DB2 subsystem.

Table 18 shows the key connectivity parameters that must be defined to add an RDB in the DataHub/2 workstation configuration and where to find the related parameter for each host, if applicable.

Table 18. DataHub/2: Key Connectivity Parameters

  Parameter           DataHub/2            MVS Host    VM Host    AS/400 Host      OS/2 Host
                      Workstation
  Symbolic            CPI communications   n/a         n/a        n/a              n/a
  destination name    side info
  Partner LU name     n/a                  VTAM APPL   VTAM APPL  LOCNAME          Local node name
  Partner TP name     TP name              n/a         RESID      AS/400 program   Transaction program
                                                                  name
  Host RDB name       RDB                  Location    DBNAME     *LOCAL RDB       Map file
  MODE name           Transmission         MODEENT     MODEENT    MODE             Transmission
                      service mode                                description      service mode
  OS/2 database       DBM System           n/a         n/a        n/a              Map file
                      Directory alias

Symbolic Destination Name

A symbolic destination name is a name or an alias given to an OS/2 SNA resource that defines the characteristics of an LU 6.2 conversation between an OS/2 system and another host, which can be another OS/2 system. In a DataHub environment, the symbolic destination name represents a DataHub Support component to which DataHub/2 will send requests using the tools conversation protocol. The symbolic destination name includes definition of the:

• Partner logical unit name (DataHub Support component)
• Partner transaction program name
• Mode name
• Conversation security type.


The symbolic destination name is defined in OS/2 Communications Manager. It is used by DataHub/2 to associate a specific host with a DataHub Support application that will handle the tools conversation on that host.

This parameter is configured to DataHub/2 in the Add Host dialog, in the Symbolic destination field.

Partner Logical Unit Name

In addition to the DRDA connectivity requirements, DataHub sessions are established between the DataHub/2 workstation and the DataHub Support components on the different managed hosts. Those sessions are what we call the tools conversation; they are LU 6.2 (APPC) sessions. Therefore a logical unit name must be assigned to the DataHub Support components. For the MVS and VM hosts, DataHub Support/MVS and DataHub Support/VM will need to be added to the VTAM definitions.

For the AS/400 and OS/2 environments, you do not have applications to define. You only need to connect to the system using the proper link definitions as defined in 3.6, “AS/400 Definitions” on page 60 and “Partner Token-Ring Address and Logical Unit Name” on page 81. For those environments, the transaction program name that will handle the request on the managed host distinguishes a DRDA or an RDS request from a tools conversation request.

Partner Transaction Program Name

The transaction program name is the name of a program, or an alias of a program name, that processes the tools conversation requests at the managed host location in a DataHub Support environment. The implementation of the transaction program name varies for each platform.

Mode Name

The mode name refers to a group of parameters that are used during the session negotiation between DataHub/2 and the DataHub Support components. The mode name must be defined at each location. The name and the class of service must match.

Host RDB Name

The SAA RDB name is used to identify a host database at the DataHub/2 workstation. This is an important parameter because you can associate one or more RDB names with each host. MVS and VM allow you to have multiple DB2 subsystems and SQL/DS database machines, respectively.

Although OS/2 hosts cannot act as servers in a DRDA network, which means they do not have a DRDA RDB name associated with them, DataHub/2 requires such a definition. In DataHub/2 you associate one RDB name with each OS/2 Database Manager database you want to manage.

OS/2 Database Manager Database Alias Name

The OS/2 Database Manager database alias name is not a key connectivity parameter in terms of customizing the DataHub Support components, but the value is key at both the DataHub/2 workstation and the OS/2 managed hosts. This value is used by any DataHub/2 function that executes through the DRDA or RDS protocol. The name defined in DataHub/2 is used to connect to OS/2 Database Manager, which in turn will connect to the managed host.



DB2 Subsystem Name

This parameter is key only when the RDB is a DB2 subsystem. It is used in the tools conversation protocol.

5.2 The ITSC Network

The DRDA ITSC network we want to manage from a DataHub/2 workstation is described in 3.3, “The ITSC Network” on page 38. Because DRDA and RDS connectivity are the basis for the DataHub implementation, you should refer to the different definitions we use for DRDA connectivity. Many of those definitions are used to customize the different components of the DataHub/2 workstation.

As explained in 5.1, “Key Connectivity Parameters” on page 152, each managed host must be identified by several names. Some must be unique, like the RDB name and the logical unit name. Some are specific to the environment setup, like the TP name and the mode name.

Table 19 and Table 20 show the values assigned to each ITSC DataHub/2 managed host.

Table 19. ITSC Network: DataHub Key Connectivity Parameter Values. MVS, VM, and AS/400 DataHub Managed Hosts

  Parameter               New York DB2        Toronto SQL/DS       San Jose AS/400
                          Headquarters        Large Car Dealer 1   Car Dealer 1
  Symbolic dest. name     SMNYCENT            SVTORDR1             SASJDLR1
  Partner logical unit    USIBMSC.EMQMACB1    USIBMSC.VDHTOR01     USIBMSC.SJA2054I
  name
  Partner TP name                             EMQVRSID             QDMU/QSTSC
  Mode name               EMQMLOGM            IBMRDB               IBMRDB
  Host RDB name           DB2CENTDIST         SQLLDLR01            SJ400DLR1
  Host DBM database       MNYCENT             VTORDB               ASJDB
  alias name

  Note: The AS/400 program name will always be QDMU/QSTSC.

Table 20. ITSC Network: DataHub Key Connectivity Parameter Values. OS/2 DataHub Managed Hosts

  Parameter               Montreal OS/2 DBM   Paris OS/2 DBM       Rio De Janeiro DB2/2
                          Car Dealer 1        Car Dealer 1         Car Dealer 1
  Symbolic dest. name     SMTLDLR1            SPARDLR1             SRIODLR1
  Partner logical unit    USIBMSC.SJA2018I    USIBMSC.SJA2050I     USIBMSC.SJA2019I
  name
  Partner TP name         EMQ2T               EMQ2T                EMQ2T
  Mode name               IBMRDB              IBMRDB               IBMRDB
  Host RDB name           DBMMTLDLR1          DBMPARDLR1           DBMRIODLR1
  Host DBM database       OMTLDB              OPARDB               ORIODB
  alias name


5.3 Configuring the DataHub/2 Workstation

The DataHub/2 workstation as described in Chapter 4, “DataHub/2 Workstation” on page 105 needs definitions in:

• DataHub/2 database
• OS/2 Communications Manager
• OS/2 Database Manager directories
• OS/2 User Profile Management.

5.3.1 Component Relationships

The following DataHub key connectivity parameters are unique to the DataHub/2 workstation:

• Symbolic destination name. The partner LU name and the partner TP name are derived from this parameter to enable communication between the DataHub/2 workstation and the different DataHub Support partners.

• OS/2 Database Manager database alias name. This parameter, which is assigned to an RDB name, is the key parameter to connect DataHub/2 to OS/2 Database Manager, which in turn will establish the connection between the DataHub/2 application and the source or target databases.

Figure 110 on page 156 shows the relationship among the three main components used in a DataHub/2 environment. The security aspect of DataHub/2 is discussed in 7.1, “Security on the DataHub/2 Workstation” on page 241.

When a request requires the use of DRDA or RDS, DataHub/2 will connect to the OS/2 Database Manager database alias name entered in the RDB configuration. The OS/2 Database Manager processes the request using the different directory entries and OS/2 Communications Manager definitions as shown in Figure 46 on page 71.

If, however, the request requires the use of the tools conversation protocol, the symbolic destination name defined in either the host configuration or the RDB configuration will be used. DataHub/2 asks OS/2 Communications Manager to establish a conversation with a DataHub Support partner using this name. OS/2 Communications Manager then uses its own definitions to establish the session. These definitions include:

• Partner logical unit name
• Partner transaction program name
• Mode name
• Conversation security type.

Once OS/2 Communications Manager establishes the session, the DataHub/2 request is sent to the DataHub Support partner to be executed at the host.



Figure 110. DataHub/2: OS/2 Component Relationship. This figure shows the relationship among the values for the different components used in the DataHub/2 environment.

5.3.2 DataHub/2 Database Definitions

We use the term "database definitions" for customizing DataHub/2 because it is through OS/2 Database Manager database tables that DataHub/2 defines the different hosts it manages.

This section shows how to add a managed host and a managed RDB to the DataHub/2 database. The important names you need to define in the DataHub/2 database are:

• Host name
• Symbolic destination name
• Host RDB name
• OS/2 Database Manager database alias name
• DB2 subsystem name (MVS only).

We assume that you have started the DataHub/2 workstation and have the DataHub/2 - Untitled window displayed at your workstation (see Figure 111). Figure 111 shows the different hosts that have been defined in our DataHub/2 database. Note that the hosts are listed in ascending order.

If you have not defined any hosts, your screen will be empty, but you should have the Host object displayed on the window.

To add a host to the DataHub/2 database, perform these steps:

1. Select the Host object on the DataHub/2 - Untitled window.

2. Select Configure from the action bar.

   The Configure pull-down menu will then appear (see Figure 111).

Figure 111. DH/2: Selecting Add on Configure Pull-down Menu

3. Select Add... from the Configure pull-down menu.

   The Add Host window will appear (Figure 112 on page 158).

4. Fill in the Host field.

   The Host field can have any value you want. The value will be used in the DataHub/2 user profile to associate a userid and/or password with the managed host for the tools conversation requests. The field should reflect the type of host system (MVS, VM, AS/400, OS/2). Most importantly, it should reflect the physical location of the host. The host name must be unique in the DataHub/2 database.



Recommendation

A good naming convention should be defined for:

• Host name
• Symbolic destination name
• Partner LU name
• Partner LU alias name
• OS/2 Database Manager database alias name.

That naming convention should be used on every DataHub/2 workstation, regardless of the DataHub/2 configuration being used.

5. On the Add Host window, fill in the Symbolic destination field.

   The symbolic destination name can be up to eight characters long and must be the same name you use in the OS/2 Communications Manager CPI Communications side information. You should use the same name as the host name. To help you recognize the relationship with other definitions, however, we did not use the same name.

6. Select your Operating System Type.

7. Select the OK push button when you have completed your host definition.

Figure 112. DH/2: Add Host Window

The host that you added will appear on the DataHub/2 main menu.

Now you must define the RDB on the new host. Note that multiple RDBs can exist on one host (for example MVS, VM, and OS/2).

The steps to add an RDB to a host are:

1. Select the host name from the DataHub/2 main menu.

   The Display Related Objects window will appear as shown in Figure 113 on page 159.

2. Select the RDB line from the window and click on the Display push button.


Figure 113. DH/2: Display Related Objects Window

   The DataHub/2 - Untitled window will be refreshed with an RDB object added under the new host.

3. Select the RDB object and select Configure from the action bar.

   The Configure pull-down menu will appear. The DataHub/2 window should now look like that shown in Figure 114.

4. Select Add... from the Configure pull-down menu.

Figure 114. DH/2: Adding an RDB

   The Add RDB window will appear as shown in Figure 115 on page 160, but all the field values will be blank.

   In the RDB field, insert the DRDA RDB name of the target RDB. This name is exactly the same as the name used for DRDA connectivity. Chapter 3, “DRDA Connectivity” on page 31 explains in detail how this value is defined in each host system. This name must be unique in the DRDA network and in the DataHub/2 database. Even though the OS/2 Database Manager does not support the DRDA server environment, a unique RDB name must be assigned to an OS/2 Database Manager database.

   In the OS/2 database field, enter the database alias name as defined in the OS/2 Database Manager system directory.



   In the Symbolic destination field, do not enter any value unless you have multiple DataHub Support components implemented on that host for performance or testing reasons. You enter a value in this field only to specify a symbolic destination name different from that specified at the host level.

   If the host is an MVS host, the DB2 subsystem field will be enabled so that you can enter the DB2 subsystem name.

   Figure 115 shows how we defined our DB2 RDB located in New York.

5. Select the proper String Delimiter and Decimal Separator radio buttons.

6. Click on the OK push button when you have completed the RDB definition.

Figure 115. DH/2: Add RDB Window

The refreshed DataHub/2 - Untitled window will appear with the new RDB added under the new host.

Figure 116 on page 161 shows all the hosts we have defined in the DataHub/2 database for the ITSC network and their RDB names. The RDB shown as selected is the MVS DB2 RDB that we defined under the MNYCENT host.

Figure 116. ITSC Hosts and Their RDB Names

Table 21 shows the DataHub/2 database definitions we used in the ITSC network. You will need to refer to these definitions as you continue with the DataHub/2 workstation configuration.

Table 21. DataHub/2 Database Definitions Used in the ITSC Network

  DataHub/2   Symbolic      Operating     RDB Name      OS/2       DB2
  Host Name   Destination   System Type                 Database   Subsystem
              Name
  ASJDLR1     SASJDLR1      AS/400        SJ400DLR1     ASJDB      n/a
  MNYCENT     SMNYCENT      MVS           DB2CENTDIST   MNYCENT    DB23
  OMTLDLR1    SMTLDLR1      OS/2          DBMMTLDLR1    OMTLDB     n/a
  OPARDLR1    SPARDLR1      OS/2          DBMPARDLR1    OPARDB     n/a
  ORIODLR1    SRIODLR1      OS/2          DBMRIODLR1    ORIODB     n/a
  VTORDLR1    SVTORDR1      VM            SQLLDLR01     VTORDB     n/a

5.3.3 OS/2 Communications Manager Definitions

Depending on the DataHub configuration you select, OS/2 Communications Manager could require many definitions on the DataHub/2 workstation. Unlike DRDA connectivity, where you can use a DDCS/2 gateway to get to all the managed hosts, the tools conversation protocol requires OS/2 Communications Manager definitions at each DataHub/2 workstation. If you are not using a DDCS/2 gateway but are using the LU 6.2 protocol for RDS connectivity, many of the DataHub tools conversation protocol requirements may be familiar to you.

This section shows you how to define the OS/2 Communications Manager parameters specific to DataHub tools conversation connectivity. It considers the following definitions:

• Local node characteristics
• Communications link
• Partner LU
• CPI Communications side information: symbolic destination name
• Mode name.

Local Node Characteristics

Each DataHub/2 workstation must be customized for SNA connectivity, whatever DataHub/2 configuration is used. If you use the single DataHub/2 workstation configuration as in scenarios 1 and 2, your SNA definitions for DRDA connectivity are already complete. If your DataHub/2 workstation is using a DDCS/2 gateway and the NETBIOS protocol to access it (as in scenario 3), you need to define your workstation in the SNA network. This definition is necessary because the tools conversation protocol requires a session to be established between the DataHub/2 workstation and the DataHub Support components. If one of the managed hosts is an MVS or VM host, VTAM definitions will be required for each DataHub/2 workstation (see Figure 47 on page 74 and Figure 137 on page 184).

The steps required to define the local node characteristics are presented in “IDBLK, IDNUM, Network ID, Logical Unit Name, SNA Control Point Name” on page 77. Table 10 on page 80 shows how each OS/2 workstation has been defined for the ITSC network.

Communications Link

On each DataHub/2 workstation, one link must be defined for each managed OS/2 and AS/400 host. Links are defined as peer nodes. Only one link is needed for the MVS and VM managed hosts because VTAM and NCP take care of the routing between them.

To define the communications links between the DataHub/2 workstation and the managed hosts, refer to “Partner Token-Ring Address and Logical Unit Name” on page 81. Table 11 on page 85 shows the values we used for the ITSC network. It is important to note that Table 11 on page 85 is related to DRDA connectivity. For DataHub tools conversation connectivity, the partner LU will be different: in a DRDA session, the partner LU is an RDB system; in a tools conversation session, the partner LU is the DataHub Support component.

Partner LU

For the MVS and VM managed hosts, a partner LU definition is required for every instance of DataHub Support. There can be multiple instances of DataHub Support on MVS and VM systems.

For the AS/400, the partner LU used for the tools conversation protocol is the same as for DRDA. For OS/2, the partner LU used for the tools conversation protocol is shown in Table 11 on page 85. What differs is the transaction program name that will handle the request at the managed host. The transaction program name is defined in the symbolic destination name definitions discussed below.

To define a partner LU, you need to go through the link definitions as explained in “Partner Token-Ring Address and Logical Unit Name” on page 81.

Figure 117 on page 163 shows the Change Partner LUs window for the host link (MVS and VM hosts). You can see all the partner LUs already defined for DRDA and DataHub tools conversation connectivity. It shows how we add (change) the DataHub Support/MVS partner LU. The partner LU name (EMQMACB1) must match the APPL name in the VTAM definition as shown in Figure 128 on page 174. When you add a partner LU in your link definition, the new name will appear in the LU name list box window.

The partner LU alias can be any name, but it should be a meaningful name so that it can be easily referred to.

Because the DataHub Support partner LU name (USIBMSC.EMQMACB1) or its alias name will be used later in the symbolic destination name definition, you should define the partner LU and the link before defining the symbolic destination name.

Figure 117. Partner LU Definition for an SAA RDB System

Symbolic Destination Name

The symbolic destination name is an alias for a group of definitions related to a partner, which in this case is a DataHub Support component.

You will need to define one symbolic destination name for every DataHub Support component in the network. If your configuration contains multiple DataHub/2 workstations, those definitions can be split among the workstations. You do not need to define a symbolic destination name if you are not going to use it from your workstation.

To define a symbolic destination name, start the OS/2 Communications Manager setup function and select the appropriate choices until you end up at the SNA Features List window (see Figure 118 on page 164). The steps required are listed in “IDBLK, IDNUM, Network ID, Logical Unit Name, SNA Control Point Name” on page 77. On the SNA Features List window, perform the following steps:



1. Select the CPI Communications side information line.

   The list of all symbolic destination names already defined will appear in the Definition list box.

2. Select the Create... push button, or select the definition you want to work with (in this case SMNYCENT).

3. Click on the Change... push button.

Figure 118. SNA Features List Window (CPI Communications Side Information)

4. The Change CPI Communications Side Information window will appear with the actual definition for the DataHub Support/MVS symbolic destination name in New York (see Figure 119 on page 165). You need to fill in the fields in this window.

   The Symbolic destination name field can contain any value, but the value must be the same as the one specified in the DataHub/2 database definition.

   For the Partner LU fields, you can use either the fully qualified name (that is, USIBMSC.EMQMACB1) or the partner LU alias name for the DataHub Support component. For the AS/400 and OS/2 hosts, you use the LU names for those systems (see 3.6, “AS/400 Definitions” on page 60 and “IDBLK, IDNUM, Network ID, Logical Unit Name, SNA Control Point Name” on page 77).

   The TP name field is used to define the program that will process the request on the host side. The value can be explicit, as for the AS/400, or it can refer to a definition on the host side. These are the values of the TP names for the different hosts:

   • MVS: can be any name
   • VM: same as the value assigned to the RESOURCEID parameter in the EMQVSYS CONFIG DHS/VM configuration file
   • AS/400: QDMU/QSTSC (library name/program name)
   • OS/2: an alias of an OS/2 program name (for example, EMQ2T). EMQ2T must be defined in the OS/2 host as a transaction program. Both the alias and the program name must be the same.

   In the Mode name field, you enter the mode that will be used for that connection. It must have been defined through the SNA Features List window.

   In the Security type field, you can select Same or Program. DataHub/2 will enforce program security and use the DataHub/2 user profile information to send the request to the appropriate host. If you select the Program security type from this window, you will have to click on the Continue... push button (which replaces the OK push button when you select the Program security type) and fill in some security information on the Change CPI Communications Program Security window.

   Note: The security information you provide on the window in Figure 119 will be overridden by the DataHub/2 user profile security information.

   Recommendation

   We recommend that you select security type = Same. DataHub/2 will enforce program security and will use the DataHub/2 user profile for the security information.

Figure 119. Change CPI Communications Side Information



5. Click on the OK push button when you have completed the Change CPI Communications Side Information window.

   The SNA Features List window will reappear with the new symbolic destination name in the list box if the requested function was to add a definition.

6. Click on the Close push button when you have completed the symbolic destination name definitions. If required, you can add a specific mode name to your actual mode name definition by selecting the Modes line from the Features box.

You should now go through the different windows until your configuration is validated and your active configuration is updated.

Table 19 on page 154 shows the symbolic destination names and all the other parameters we used to define the RDBs in our DataHub/2 configuration. Appendix A, “OS/2 Communications Manager Network Definition File” on page 259 shows the OS/2 NDF file that was created for our DataHub/2 workstation OS/2 Communications Manager configuration.

Mode Name

If required, you can add a new mode to your mode list if the modes that are already defined (for example, IBMRDB) do not satisfy your requirements. Use the procedure in “Mode Name” on page 85 to add a new mode to your system. Note that you will need to define the new mode on every host that will use it.

5.3.4 OS/2 Database Manager Definitions

One important parameter that must be defined when adding an RDB to the DataHub/2 database is the OS/2 database name. This OS/2 database name is the alias name defined in the OS/2 Database Manager system directory that points to the target managed RDB. Using that alias name and the related entries in the different directories, OS/2 Database Manager will direct the connect request to the proper host.

To gain an understanding of all of the relationships among the OS/2 Database Manager, DDCS/2, and OS/2 Communications Manager definitions, refer to 3.7.2, “Relationship between OS/2 Components” on page 70. From an OS/2 Database Manager perspective, DataHub/2 is just another program that makes SQL requests, such as connect and select.

Complete instructions for defining the OS/2 Database Manager system directory entries can be found in 3.7.5, “DRDA Definitions” on page 86. Those entries should be defined and tested before you even start to implement DataHub. Figure 120 on page 167 shows all the OS/2 Database Manager system directory definitions we used for the ITSC network. The MNYCENT alias corresponds to the DataHub/2 and DB2 RDB name, DB2CENTDIST.


Figure 120. OS/2: System Database Directory
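The directory entries behind such an alias are created with the Database Manager CATALOG commands. The following is a minimal sketch only, not our actual configuration: the node name NYNODE is hypothetical, the alias and RDB name are taken from Table 21, and the DCS entry assumes the host is reached through DDCS/2 (see 3.7.5, “DRDA Definitions” on page 86 for the exact procedure we used):

  REM Minimal sketch; NYNODE is a hypothetical node directory entry.
  DBM CATALOG DATABASE MNYCENT AT NODE NYNODE
  DBM CATALOG DCS DATABASE MNYCENT AS DB2CENTDIST

In this sketch the system directory maps the alias MNYCENT to a node, and the DCS directory maps it to the DRDA RDB name DB2CENTDIST.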

5.4 Configuring MVS for DataHub Support/MVS

DataHub Support/MVS must be installed to allow DataHub/2 to manage the DB2 databases at the host.

DataHub Support/MVS supports users working at a DataHub/2 workstation by receiving the DataHub commands they initiate. Depending on the function that a user action triggers, DataHub might use different connectivity, because there are two types of information flow between SAA DBMSs: the DRDA flow, which uses SNA LU 6.2, and the tools conversation flow, which is a private DataHub protocol that also uses LU 6.2. Some DataHub functions, such as DISPLAY RDB WORK and UTILITIES, use the tools conversation flow.

This section focuses on the important parameters that must be defined and must match across OS/2 Communications Manager and Database Manager, VTAM, NCP, and the DB2 system tables.

5.4.1 Preparing for DataHub Support/MVS Installation

The steps required to prepare for DataHub Support/MVS installation are listed below. More details can be found in Chapter 2 of the DataHub Support/MVS Installation and Operations Guide.

The preparation steps are:

1. Verify that the system hardware is at the required level.
2. Verify that the system software products are at the required release level.
3. Determine which JES run class or classes will be used by DataHub Support/MVS and by the tools you plan to add to the environment.
4. Define two output hold classes for use by the DataHub Support/MVS Tool Dispatcher and tools.
5. Define the MVS data set high-level qualifier to RACF or to a functionally equivalent security package so that users can create, update, and delete data sets.
6. Verify the connectivity requirements between the DataHub Support/MVS host and the DataHub/2 workstation.
7. If running in a JES3 environment, ensure that the DataHub Support/MVS Tool Dispatcher executes on the JES3 global processor.
8. Determine whether there is a need, due to site-dependent restrictions or requirements, for a TSO OUTPUT command exit or a jobcard exit.

5.4.2 Installing DataHub Support/MVS Platform and Tools Feature

DataHub Support/MVS is installed using SMP/E. The installation does not differ from other IBM product installations; that is, create the JCL to read the tape, allocate the required SMP/E CSI, allocate the SMP/E data sets and initialize the CSI zones, and use the unloaded jobs to Receive, Apply, and Accept the product.

The above process should be repeated twice: once for the DataHub Support/MVS Platform installation and once for the DataHub Support/MVS Tools Feature installation. The installation procedure is very easy, but it requires that your system programmer tailor and set up the MVS requirements.

More details can be found in Chapter 3 of the DataHub Support/MVS Installation and Operations Guide as well as in the DataHub Support/MVS Program Directory.

5.4.3 Configuring the DataHub Support/MVS Environment

This section discusses the configuration process and relates some of our experiences during our environment customization.

EMQMCI01 CLIST

After the SMP/E installation you need to customize DataHub Support/MVS. This is done by executing a configuration CLIST called EMQMCI01. The EMQMCI01 CLIST will prompt you, as in the DB2 customization process, with panels where you supply parameter information that will replace the defaults to reflect your environment.

Before running the EMQMCI01 CLIST in our environment, we concatenated the DataHub data sets in the TSO logon procedure (see Figure 121 on page 169).
DataHub data sets in the TSO logon procedure (see Figure 121 on page 169).


........................
........................
/*****************************************************************/ +
/* SYSPROC                                                        */ +
/*****************************************************************/ +
ALLOC FI(SYSPROC) SHR DA( +
 'DSN230.NEW.DSNTEMP' +
 'DATAHUB.V110.SEMQMCLB' +
 'DSN230.DSNCLIST' +
........................
........................
/*****************************************************************/ +
/* ISPPLIB                                                        */ +
/*****************************************************************/ +
ALLOC FI(ISPPLIB) SHR DA( +
 'DSN230.LOCAL.PLIB' +
 'DATAHUB.V110.SEMQMPLB' +
 'STWALTE.DDB.PANEL.LIB' +
 'STL.TEST.PANELS' +
........................
........................
/*****************************************************************/ +
/* ISPMLIB                                                        */ +
/*****************************************************************/ +
ALLOC FI(ISPMLIB) SHR DA( +
 'DSN230.LOCAL.MLIB' +
 'DATAHUB.V110.SEMQMMLB' +
 'STWALTE.DDB.MESSAGE.LIB' +
........................
........................

Figure 121. EMQMCI01 CLIST DataHub Support/MVS Data Set Concatenation

DataHub Support/MVS Data Sets

During both the installation and customization process you should use the data sets listed in Figure 122 on page 170.



DATAHUB.SMP.GLOBAL.CSI             DATAHUB.V110.EMQJCL
DATAHUB.SMP.GLOBAL.CSI.DATA        DATAHUB.V110.INSTALIB
DATAHUB.SMP.GLOBAL.CSI.INDEX       DATAHUB.V110.SEMQMCLB
DATAHUB.SMPLOG                     DATAHUB.V110.SEMQMDBR
DATAHUB.SMPMTS                     DATAHUB.V110.SEMQMHDR
DATAHUB.SMPPTS                     DATAHUB.V110.SEMQMJCL
DATAHUB.SMPSCDS                    DATAHUB.V110.SEMQMLIB
DATAHUB.SMPSTS                     DATAHUB.V110.SEMQMLMD
DATAHUB.V110.AEMQMCLB              DATAHUB.V110.SEMQMMLB
DATAHUB.V110.AEMQMDBR              DATAHUB.V110.SEMQMPLB
DATAHUB.V110.AEMQMHDR              DATAHUB.V110.SEMQMPRM
DATAHUB.V110.AEMQMMLB              DATAHUB.V110.SEMQMSAM
DATAHUB.V110.AEMQMMOD              DATAHUB.V110.SEMQMSLB
DATAHUB.V110.AEMQMPLB              DATAHUB.V110.STDB2C.A0060002.TRACE
DATAHUB.V110.AEMQMSAM              DATAHUB.V110.STDB2C.J31982.JESMSG
DATAHUB.V110.AEMQMSLB              DATAHUB.V110.STDB2C.J31982.TOOLMSG
DATAHUB.V110.A006.TDTRACE
DATAHUB.V110.EMQDB23.SYSDISC

Figure 122. Installation Data Sets


Customization Jobs

After you have run the EMQMCI01 CLIST, members are included in the DATAHUB.V110.SEMQMJCL data set (see Figure 123). These members are jobs that have to be submitted to finalize the customization process. Most members are skeletons that will be used during DataHub Support/MVS function and tool execution.

EMQBKUP    EMQJOB
EMQDB2BB   EMQLOAD
EMQDB2BC   EMQMAIN
EMQDB2BD   EMQRECOV
EMQDB2BL   EMQREORG
EMQDB23B   EMQUNLD
EMQDB23C   EMQUSTAT
EMQDB23D   EMQ000D
EMQDB23L   EMQ000E
EMQJES     EMQ0007

Figure 123. DHS/MVS: DATAHUB.V110.SEMQMJCL Members

When submitting the required jobs, you will get an 806 abend (see Figure 124) if you did not make the C runtime library SEDCLINK and the common library SIBMLINK available to the jobs (for example, in the STEPLIB concatenation). The SIBMLINK PL/I library contains the IBMBLIIA module.

IEF403I STDB2CA - STARTED - TIME=15.32.59
CSV003I REQUESTED MODULE IBMBLIIA NOT FOUND
CSV028I ABEND806-04 JOBNAME=STDB2CA STEPNAME=QSRDB
IEA995I SYMPTOM DUMP OUTPUT
SYSTEM COMPLETION CODE=806 REASON CODE=00000004

Figure 124. DHS/MVS: Abend 806, Module Not Found
Figure 124. DHS/MVS: Abend 806, Module Not Found


DataHub Support/MVS Data Set APF Authorization

DataHub Support/MVS needs to run APF authorized in order to use JES and MVS commands and to perform cross-memory POST. In our environment we included the DATAHUB.V110.SEMQMLIB data set in SYS1.PARMLIB(IEAAPF00), as shown in Figure 125.

...............  ......,
...............  ......,
DATAHUB.V110.SEMQMLIB STDB2A,
...............  ......,
...............  ......,

Figure 125. DHS/MVS Data Set APF Authorization

More details can be found in Chapter 3 of the DataHub Support/MVS Installation and Operations Guide as well as in the DataHub Support/MVS Program Directory.

5.4.4 Starting DataHub Support/MVS

Once you have completed the preparation steps, you can start DataHub Support/MVS either as a started task or as a batch job. We used a batch job to start DataHub Support/MVS (see Figure 126 on page 172).



//STDB2AMQ JOB (999,POK),
// 'DATAHUB/MVS',
// CLASS=A,MSGCLASS=T,MSGLEVEL=(1,1),
// TIME=1439,NOTIFY=&SYSUID
//**************************************************************
//*
//* DATAHUB SUPPORT/MVS TOOL DISPATCHER
//*
//**************************************************************
//EMQMAIN PROC PRM=EMQOUT#3,DCLS1=Y,TCLS=T,
// OCLS=X,RS=6M
//*
//TD EXEC PGM=EMQMCA02,PARM='/&PRM',REGION=&RS,
// TIME=1440
//STEPLIB DD DSN=DATAHUB.V110.SEMQMLIB,DISP=SHR
//JCLLIB DD DSN=DATAHUB.V110.SEMQMJCL,DISP=SHR
//PARMLIB DD DSN=DATAHUB.V110.SEMQMPRM,DISP=SHR
//*
//EMQMDUMP DD SYSOUT=&DCLS1
//CEEDUMP DD SYSOUT=&DCLS1
//TDTRACE DD SYSOUT=&TCLS,DCB=(LRECL=80,BLKSIZE=6160,RECFM=FB)
//SYSPRINT DD SYSOUT=&TCLS
//STDOUT01 DD SYSOUT=&OCLS
//STDOUT02 DD SYSOUT=&OCLS
//STDOUT03 DD SYSOUT=&OCLS
//STDERR01 DD SYSOUT=&OCLS
//STDERR02 DD SYSOUT=&OCLS
//STDERR03 DD SYSOUT=&OCLS
//SYSABEND DD SYSOUT=&DCLS1
//PLIDUMP DD SYSOUT=&DCLS1
//EMQMAIN PEND
//WALTER EXEC EMQMAIN

Figure 126. DHS/MVS Startup Job

5.4.5 DataHub Support/MVS Connectivity Requirements

Before you test DataHub Support/MVS, you have to make sure that the connectivity requirements are in place. For DataHub/2 to communicate with DataHub Support/MVS and the DB2 subsystems that it manages, you have to set up the required DRDA and tools conversation connectivity.

DRDA Connectivity

We suggest that, before you begin to install DataHub, you have your DRDA connectivity implemented and tested. This will make it easy to determine the causes of eventual connectivity problems. DRDA connectivity is covered in detail in Chapter 3, “DRDA Connectivity” on page 31.


DataHub Tools Conversation Connectivity

DataHub tools conversation connectivity, which is an LU 6.2 connection using the DataHub tools conversation protocols, is required between DataHub/2 and DataHub Support/MVS. The Tool Dispatcher handles the communication functions for DataHub Support/MVS. It must be defined as a VTAM application. You define the Tool Dispatcher as you would your own applications; that is, update the ATCCONnn member in your VTAMLST library to include the EMQMACB1 application. You also need to define a LOGMODE that will be used to establish the characteristics of the session between DataHub/2 and DataHub Support/MVS.

Refer to 5.1, “Key Connectivity Parameters” on page 152 and analyze Figure 110 on page 156 to understand the connections made from DataHub/2 to DataHub Support/MVS.

The application name and the logmode name are specified during the customization process. Refer to Figure 127 for the entries you want to activate when establishing sessions between the DataHub Support/MVS LU and the DataHub/2 LU.

DataHub Support/MVS Release 1
CONFIGURE TOOL DISPATCHER REQUIRED PARAMETERS
===>

Enter data below:

Started Task  ===> EMQMAIN_              Name of procedure member
Steplib       ===> ____________________________________________
                                         STEPLIB for DataHub Support/MVS
                                         load module
Trace Flags   ===> 00110101101011010001  DataHub Support/MVS trace flags
Application   ===> EMQMACB1              VTAM application id
Mode Name     ===> EMQMLOGM              VTAM logmode name
Maximum Users ===> 25_                   Maximum concurrent users
JES Name      ===> ____                  JES subsystem name
JES Character ===> _                     JES command character

Figure 127. DataHub Support/MVS APPL and LOGMODE Entries

If the ACB name specified on the VTAM APPL statement of the DataHub Support/MVS Tool Dispatcher is different from the VTAM application name, then either the VTAM application name or the ACB name can be entered in the Application field in Figure 127. Nevertheless, the VTAM application name must be entered in the OS/2 Communications Manager partner LU name definition for DataHub Support/MVS (see Figure 117 on page 163).

If the ACB name is not defined explicitly in the VTAM APPL definition, it defaults to the VTAM application name, and in this case the VTAM application name must be entered in the Application field in Figure 127.

You can find the DataHub Support/MVS VTAM APPL definition in the EMQAPPL member of the DATAHUB.V110.SEMQMSAM data set. The EMQAPPL member includes four VTAM APPL definitions. One VTAM APPL definition is required for each copy of DataHub Support/MVS you install. Refer to Figure 128 on page 174 for the EMQMACB1 APPL definition.

*
         VBUILD TYPE=APPL
EMQMACB1 APPL  ACBNAME=EMQMACB1,
               AUTH=(ACQ,PASS),
               APPC=YES,
               SONSCIP=YES,
               SECACPT=CONV,
               EAS=500,
               MODETAB=EMQMODE,
               AUTOSES=0

Figure 128. DataHub Support/MVS Tool Dispatcher VTAM Application Name Definition for EMQMACB1 APPL

The DATAHUB.V110.SEMQMSAM data set also contains the EMQMODE member, which holds a mode table definition that includes two logmodes. Refer to Figure 129 for the contents of the EMQMODE member.

*/*
*/* LU 6.2 ENTRY SNASVCMG - SERVICE MANAGER SESSIONS
*/*
EMQMODE  MODETAB
         MODEENT LOGMODE=SNASVCMG,  LOGON MODE NAME                    X
               TYPE=0,                                                 X
               FMPROF=X'13',        FUNCTION MANAGEMENT PROFILE        X
               TSPROF=X'07',        TRANSMISSION SERVICES PROFILE      X
               PRIPROT=X'B0',       PRIMARY LOGICAL UNIT PROTOCOL      X
               SECPROT=X'B0',       SECONDARY LOGICAL UNIT PROTOCOL    X
               COMPROT=X'D0B1',     COMMON LOGICAL UNIT PROTOCOLS      X
               RUSIZES=X'8585',     TRANSMISSION SERVICES USAGE FIELD  X
               PSERVIC=X'060200000000000000000300',                    X
               PSNDPAC=X'00',                                          X
               SRCVPAC=X'00',                                          X
               SSNDPAC=X'00'
*
*  APPLICATION TO APPLICATION LOGMODES
*
         MODEENT LOGMODE=EMQMLOGM,  LOGON MODE NAME                    X
               TYPE=0,                                                 X
               FMPROF=X'19',        FUNCTION MANAGEMENT PROFILE        X
               TSPROF=X'07',        TRANSMISSION SERVICES PROFILE      X
               RUSIZES=X'8D8D',     TRANSMISSION SERVICES USAGE FIELD  X
               PSERVIC=X'060200000000000000000300',                    X
               PSNDPAC=X'00',                                          X
               SRCVPAC=X'00',                                          X
               SSNDPAC=X'00'
*
         MODEEND
         END

Figure 129. DataHub Support/MVS SNASVCMG and EMQMLOGM Logon Modes
Figure 129. DataHub <strong>Support</strong>/MVS SNASVCMG and EMQMLOGM Logon Modes


When submitting the DataHub Support/MVS startup job, you might get an error like that shown in Figure 130 on page 175 if you did not define the DataHub Support/MVS Tool Dispatcher to VTAM by using the VTAM APPL statement.

$HASP373 STDB2AMQ STARTED - INIT 3 - CLASS A - SYS XA01
IEF403I STDB2AMQ - STARTED - TIME=14.51.59
09 EMQM005C DataHub - Unable to open VTAM ACB. APPLID = EMQMACB1 RC =....
R 9,END
+EMQM025W DataHub - Internal Error - Refer To Trace File 'DATAHUB.V11....
+EMQM013I DataHub - termination is complete.
...........................................
...........................................

Figure 130. DataHub Support/MVS Startup Error

If you get a startup error, you have to pick up the generated VTAM APPL definition that is included as the EMQAPPL member of the DATAHUB.V110.SEMQMSAM data set and include it in the VTAM definitions.
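One possible sequence for doing this, assuming that your VTAM major node definitions reside in SYS1.VTAMLST and are started through an ATCCONxx list member (both installation-dependent assumptions), is:

1. Copy the EMQAPPL member from DATAHUB.V110.SEMQMSAM into SYS1.VTAMLST.
2. Add EMQAPPL to your ATCCONxx member so that the major node is activated
   automatically at VTAM startup.
3. Activate the major node immediately with the VTAM operator command:

   V NET,ACT,ID=EMQAPPL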

Because your OS/2 system is connecting to the MVS system, or because you want to manage RDBMSs that are located on that platform, you also need to define the OS/2 workstation to VTAM. Please refer to Chapter 3, "DRDA Connectivity" on page 31 for details on how to define your OS/2 workstation to VTAM.

Please refer to 5.1, "Key Connectivity Parameters" on page 152 for the symbolic destination name definition that has to be made to enable DataHub/2 to connect to VTAM (SSCP) and the DataHub Support/MVS that will be accessed in the MVS environment.

5.5 Configuring VM for DataHub Support/VM

This section presents information that will help you understand how DataHub Support/VM works. First we briefly introduce the main components of DataHub Support/VM and explain the main DataHub Support/VM control files you have to set up to configure DataHub Support/VM. Then we describe how to start and operate DataHub Support/VM. Finally we discuss a performance consideration.

5.5.1 VM/ESA Service Pool Support Facility

The DataHub Support/VM platform uses the Service Pool Support Facility of VM/ESA. It is an efficient way of serving an environment in which host transactions are short. Figure 131 on page 176 shows a simple Service Pool environment with the resulting path from an APPC connection.

Figure 131. VM Service Pool Environment

What Is a Service Pool?

A Service Pool consists of several identical virtual machines set up to handle transactions. Typically, these transactions originate outside the VM/ESA system, such as from a workstation. AVS facilitates Service Pool management.

A private gateway LU can have a Conversation Management Routine (CMR) associated with it whenever Service Pool Support is required. The CMR is commonly known as the Service Pool manager and is product dependent: DataHub Support/VM provides its own CMR, which controls the routing from a workstation to a Service Pool virtual machine through AVS.

Theoretically a Service Pool can consist of up to 100,000 Service Pool virtual machines sharing the same Service Pool saved segment. In Figure 131 the Service Pool consists of two virtual machines (identified as SPM00000 and SPM00001).

Why Have a Service Pool?

A Service Pool can increase performance for an environment in which host transactions are short. The design lets a set of virtual machines remain logged on, saving initialization time between transactions, with additional virtual machines logged on only on demand. A Service Pool provides:

• An efficient means of accessing data existing at only one location (in a common saved segment area). As an alternative to one server, if the private server is busy, the system can go to another Service Pool machine for service.

• A general method of supporting SNA-connected independent workstations that require many short requests for host services.


5.5.2 DataHub Support/VM Operating System Overview

Figure 132 shows the operating system components that are required for DataHub Support/VM processing. The CMR is loaded into Group Control System (GCS) private storage, where it executes. AVS and VTAM run as applications under GCS. The Service Pool machines are private Service Pool machines that run under the Conversational Monitor System (CMS) operating system.

Figure 132. DHS/VM Operating System Overview

An explanation of these components is as follows:

AVS Console    Used to activate and deactivate gateways, monitor DataHub Support/VM gateways and conversations, and control GCS tracing.

CMR1           The Conversation Management Routine that manages the local Service Pool machines (SPM00000 and SPM00001). The CMR selects a Service Pool machine ID to process the inbound request from DataHub/2. The CMR executes as an entity of AVS.

DataHub/2      IBM SystemView Information Warehouse DataHub/2. The origin of DataHub requests. When an action is selected (for example, "Load"), a command is sent to DataHub Support/VM to perform it (or some part of it).

GW1            Gateway LU. All inbound and outbound DataHub tools conversations for CP are processed through the gateway LU. One or more DataHub LUs can be defined to VTAM, each associated with a DataHub Support/VM CMR.

SPM00000, SPM00001
               The IDs of Service Pool machines on this system that process requests from DataHub/2. These machines are private Service Pool machines that run private resource managers. Each Service Pool machine is capable of processing any DataHub request and is selected by CMR1 only when available (not currently in use by another DataHub request).

TH             DataHub Support/VM Task Handler (private resource manager). The controlling routine of the VM host component that will process a DataHub/2 request. It controls the Service Pool machine in which the functions of the DataHub Support/VM tools feature execute to process DataHub requests. There is a predefined number of DataHub Support/VM Task Handlers, each executing in its own Service Pool machine and each processing one DataHub/2 request at a time.

5.5.3 Configuring the DataHub Support/VM Environment

Configuration Steps

Figure 133 on page 179 shows the different steps to configure the DataHub Support/VM environment. These steps are fully described in the DataHub Support/VM Installation and Operations Guide.
<strong>Support</strong>/VM Installation and Operations Guide.


Summary of Steps to Configure the DataHub Support/VM Environment

Notes:
• Perform the steps in order.
• Mandatory steps are preceded by squares (•).
• Conditional steps are preceded by triangles (►).
• Optional steps are preceded by circles (•).

1. • Grant Authorizations to DataHub/2 Host User IDs
   a. • Grant authorization to SQL/DS databases.
   b. ► Grant authorization to production SFS directories.
   c. • Grant authorization to work SFS directories.
2. • Set up AVS Environment
   a. • Determine number of Service Pool machines.
   b. • Define the AVS virtual machine to GCS.
   c. • Define gateway LUs to VTAM.
   d. • Define/redefine AVS CP directory.
   e. • Redefine AVS machine PROFILE GCS (EXEC).
   f. • Redefine AVS machine AGWPROF GCS (EXEC).
   For each DataHub Support/VM gateway:
   a. • Create Service Pool configuration file.
3. • Set up Service Pool Environment
   a. • Set up COMDIR NAMES file.
   b. ► Create EMQVPROF EXEC.
   For each Service Pool machine:
   a. • Create Service Pool machine CP directory entry.
   b. • Define Service Pool machine PROFILE EXEC.
4. • Customize DataHub Support/VM Files
   a. • Customize system configuration file.
   b. • Customize tool table.
   c. • Customize EMQVTAPE EXEC.

Figure 133. DHS/VM: Summary of Configuration Steps

Environment Configuration Overview

Figure 134 is an overview of the main control files required to configure DataHub Support/VM.

Figure 134. DHS/VM Environment Configuration Overview

Files and EXECs Relationship

Figure 135 on page 181 shows the relationship between the DataHub Support/VM files and EXECs that must be created or customized during configuration of the DataHub Support/VM environment.


Figure 135. DHS/VM Files and EXECs Overview

AGWPROF GCS    The AVS machine's AGWPROF GCS EXEC can be used to automatically execute the FILEDEF statements for the Service Pool configuration file and the ACTIVATE and CNOS statements for DataHub Support/VM gateways.

CMR            DataHub Support/VM Conversation Management Routine.

CMR Alert File The DataHub Support/VM CMR writes a record to this file, EMQVCMR ALERT, whenever it encounters an error situation.

CSRV           DataHub Support/VM Common Services.

DataHub Support/VM Alert File
               The components of DataHub Support/VM write to this file in error situations.

DataHub Support/VM Trace File
               The components of DataHub Support/VM write trace records to this file.

EMQVDCSS EXEC  If the DataHub Support/VM Task Handler is to be loaded into a saved segment, CMS invokes this EXEC to:
               • Load the saved segment
               • Invoke the DataHub Support/VM Task Handler.

EMQVINIT EXEC  If the DataHub Support/VM Task Handler is to be loaded into the Service Pool machine's nucleus extension, CMS invokes this EXEC to:
               • Issue a FILEDEF for EMQVLOAD LOADLIB on the DataHub Support/VM production minidisk or SFS directory that contains the DataHub Support/VM Task Handler
               • Load the DataHub Support/VM Task Handler into the nucleus extension
               • Invoke the DataHub Support/VM Task Handler.

EMQVPROF EXEC  The PROFILE EXEC and the DataHub Support/VM Task Handler invoke the EMQVPROF EXEC if products required by DataHub Support/VM are installed in SFS directories. The EMQVPROF EXEC accesses SFS directories required for DataHub Support/VM operation.

EMQVPUSH EXEC  When the Service Pool machine is logged on, the last step of its PROFILE EXEC invokes the EMQVPUSH EXEC, which:
               • Determines the program to be given control in the Service Pool machine (by pushing either EMQVINIT or EMQVDCSS onto the program stack)
               • Reads the system configuration file to obtain the SYSADMIN, RESOURCEID, and DCSS keywords and passes their values to the EMQVINIT or EMQVDCSS EXECs.
               (A sketch of this decision logic appears after this list.)

EMQVTAPE EXEC  Tools can invoke this EXEC to request a tape to be mounted and attached.

PROFILE EXEC   Each Service Pool machine's PROFILE EXEC invokes the EMQVPROF EXEC if products required by DataHub Support/VM are installed in SFS directories. The PROFILE EXEC also invokes the EMQVPUSH EXEC, which determines the program to be given control in the Service Pool machine.

PROFILE GCS    The AVS machine's PROFILE GCS contains the statements to access the DataHub Support/VM CMR production minidisk and the GLOBAL LOADLIB commands for the LOADLIB required to operate DataHub Support/VM.

Service Pool Configuration File
               As part of the AGW ACTIVATE GATEWAY command issued by the DataHub Support/VM operator, the DataHub Support/VM CMR reads the Service Pool configuration file to obtain the user IDs of the Service Pool machines to use for DataHub requests.

TH             DataHub Support/VM Task Handler.

TOOL           DataHub Support/VM Tools Feature.

Tool Table     This file, EMQVTOOL CONFIG, describes all tools supported by DataHub Support/VM.

System Configuration File
               This file, EMQVSYS CONFIG, is read by:
               • The EMQVPUSH EXEC to obtain the SYSADMIN, RESOURCEID, and DCSS keyword values
               • The DataHub Support/VM Task Handler to obtain configurable values.
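To make the EMQVPUSH role concrete, the following REXX fragment is a minimal sketch of the decision logic described above. It is illustrative only and is not the shipped EMQVPUSH EXEC; it assumes EMQVSYS CONFIG is on an accessed disk and looks at only the DCSS keyword:

/* Illustrative sketch of EMQVPUSH-style logic (not the real EXEC) */
'EXECIO * DISKR EMQVSYS CONFIG * (FINIS STEM CFG.'
dcss = ''
DO i = 1 TO cfg.0                          /* scan keyword=value lines */
  PARSE VAR cfg.i keyword '=' value
  IF keyword = 'DCSS' THEN dcss = STRIP(value)
END
IF dcss <> '' THEN PUSH 'EXEC EMQVDCSS'    /* load from saved segment   */
ELSE PUSH 'EXEC EMQVINIT'                  /* load from EMQVLOAD LOADLIB */

The real EXEC also passes the SYSADMIN and RESOURCEID values to the EXEC it pushes onto the program stack.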


Main DataHub Support/VM Control and EXEC File Examples

VTAM: AVS LU Definition
               Figure 136 shows an example of the statements used to define a VTAM application for an AVS gateway to be used for DataHub Support/VM.

**********************************************************************
*          AVS VTAM DEFINITION FOR CONNECTING DHS/VM                 *
**********************************************************************
AVSVM1   VBUILD TYPE=APPL
*
VDHTOR01 APPL  APPC=YES,                                               X
               AUTHEXIT=YES,                                           X
               AUTOSES=10,                                             X
               DLOGMOD=IBMRDB,                                         X
               DSESLIM=100,                                            X
               DMINWNL=50,                                             X
               DMINWNR=50,                                             X
               MODETAB=AGWTAB,                                         X
               PARSESS=YES,                                            X
               SYNCLVL=CONFIRM,                                        X
               SECACPT=ALREADYV

Figure 136. VTAM Sample AVS LU Definition for DataHub Support/VM

AVS Profile Definition
               Gateways between VM and other platforms are defined in the AGWPROF GCS file invoked during AVS machine initialization. Some entries must be added for DataHub Support. They are highlighted in Figure 137 on page 184.


/* AGWPROF GCS which activates one gateway for global resource    */
/* communications and sets values for the session.                */
/* DataHub Support                                                */
'FILEDEF VDHTOR01 DISK POOL CONFIG J'
'AGW ACTIVATE GATEWAY VDHTOR01 PRIVATE MANAGER EMQVCMG'
'AGW CNOS VDHTOR01 SJA2015I IBMRDB 10 5 5'
'AGW CNOS VDHTOR01 SJA2018I IBMRDB 10 5 5'
'AGW CNOS VDHTOR01 SJA2039I IBMRDB 10 5 5'
'AGW CNOS VDHTOR01 SJA2019I IBMRDB 10 5 5'
'AGW CNOS VDHTOR01 SJA2050I IBMRDB 10 5 5'
/*                                                                */
/* DRDA Gateways for AVS systems********************************** */
'AGW ACTIVATE GATEWAY LUSQLVMA GLOBAL'
'AGW ACTIVATE GATEWAY LUSQLDB2 GLOBAL'
/* Connection between LUSQLVMA and PS/2 Workstations ************* */
'AGW CNOS LUSQLVMA SJA2015I IBMRDB 10 5 5'
'AGW CNOS LUSQLVMA SJA2018I IBMRDB 10 5 5'
'AGW CNOS LUSQLVMA SJA2039I IBMRDB 10 5 5'
'AGW CNOS LUSQLVMA SJA2019I IBMRDB 10 5 5'
'AGW CNOS LUSQLVMA SJA2050I IBMRDB 10 5 5'
/* Connection between LUSQLDB2 and AS/400 (SJA2054I)************** */
/* SJA2054I for other RDBMS: AS/400, LUSQLVMB,LUDB23,LUDB2B ****** */
'AGW CNOS LUSQLDB2 SJA2054I IBMRDB 10 5 5'
/* Connection between LUSQLDB2 and a DB2 (LUDB23) **************** */
'AGW CNOS LUSQLDB2 LUDB23 IBMRDB 10 5 5'
/* Connection between LUSQLDB2 and a second DB2 (LUDB2B) ********* */
'AGW CNOS LUSQLDB2 LUDB2B IBMRDB 10 5 5'
...
...
...

Figure 137. DHS/VM: AGWPROF GCS Example

Communication Directory Definitions
               Each Service Pool machine needs to define the same communication directory as defined in the DRDA environment (see Figure 138 on page 185).


* * * * * * * * * * * *
* UCOMDIR NAMES file on STSQLB requester machine.
* * * * * * * * * * * *
/* Entry to define the path to access the DB2 server DB2CENTDIST */
:nick.DB23
  :tpn.″6DB
  :luname.LUSQLDB2 LUDB23
  :modename.IBMRDB
  :security.SAME
  :dbname.DB2CENTDIST
/* Entry to define the path to access the DB2 server DB2REGNDIST01 */
:nick.DB2B
  :tpn.″6DB
  :luname.LUSQLDB2 LUDB2B
  :modename.IBMRDB
  :security.SAME
  :dbname.DB2REGNDIST01
/* Entry to define the path to access the AS/400 server SJ400DLR1 */
:nick.AS400
  :tpn.″6DB
  :luname.LUSQLDB2 SJA2054I
  :modename.IBMRDB
  :security.SAME
  :dbname.SJ400DLR1
/* Entry to define the path to access the local SQL/DS server SQLLDLR01 */
:nick.SQLLOCAL
  :tpn.SQLVMA
  :luname.*IDENT
  :dbname.SQLLDLR01

Figure 138. DHS/VM: COMDIR NAMES File

Service Pool Configuration File
               The Service Pool configuration file (Figure 139) contains the user IDs of the Service Pool machines available to the DataHub Support/VM CMR. This file can have a file name and file type of your choosing, because a FILEDEF command is used to define this file in the AGWPROF GCS file (Figure 137 on page 184). The Service Pool configuration file is read only during gateway activation.

* First record specifies number of detail records
0002
* start of detail records
SPM00000
SPM00001

Figure 139. DHS/VM: Service Pool Configuration File

System Configuration File
               The system configuration file, EMQVSYS CONFIG, is used by the DataHub Support/VM Task Handler to read any configurable values (see Figure 140 on page 186). The most important keyword is RESOURCEID. The value specified, EMQVRSID, must be the TP name used by DataHub/2 when allocating a conversation to DataHub Support/VM.


* DataHub Support/VM System Configuration File
SYSADMIN=EMQVADM
WORKMODE=H
RESOURCEID=EMQVRSID

Figure 140. DHS/VM: EMQVSYS CONFIG File

Service Pool Machine PROFILE EXEC
               Each Service Pool machine needs a PROFILE EXEC (see Figure 141).

/* PROFILE EXEC for Service Pool Machines */
'ACCESSM0 ON'
'SET MSG ON'
'SET IMSG ON'
'SET EMSG ON'
'SET WNG ON'
'SET RUN ON'
'ACCESS 19F Z'
'ACCESS 198 J'
'ACCESS 195 Q'
'GLOBAL TXTLIB IBMLIB CMSLIB EDCBASE VMLIB CMSSAA'
'GLOBAL LOADLIB EDCLINK EMQVLOAD ARISQLLD'
'SET SERVER ON'
'SET FULLSCREEN SUSPEND'
'SET AUTOREAD OFF'
'SET LANGUAGE AMENG ( ADD ARI USER'
'SET FILEPOOL VMSYS:'
'SET COMDIR FILE USER UCOMDIR NAMES *'
'ESTATE ARISRMBT MODULE A'
IF RC = 28 THEN
  'SQLINIT DB(SQLLDLR01) PROTOCOL(AUTO) SSSNAME(ARIQSTAT)'
'EXEC EMQVPUSH'
EXIT

Figure 141. DHS/VM: PROFILE EXEC of SPM00000 and SPM00001 Machines

Note: The SQLINIT DB(SQLLDLR01)... statement does not mean that this Service Pool machine can access only this database. SQLINIT is done only to set up the SQL/DS bootstrap modules. A transaction from DataHub/2 can run on any Service Pool machine against any database located at this host.

Tool Table Example
               All tools supported by DataHub Support/VM are specified in the tool table, EMQVTOOL CONFIG (see Figure 142 on page 187). The DataHub Support/VM Task Handler reads the tool table to determine the tool program to invoke. The tool table is read only during Service Pool machine logon.
logon.


* DataHub Support/VM Tool Table
0002 EMQ EMQVL00  01010 09-30-1993 Load Utility
0004 EMQ EMQVR00  01010 09-30-1993 Reorg Utility
0005 EMQ EMQVU00  01010 09-30-1993 Unload Utility
0007 EMQ EMQVM00  01010 09-30-1993 Copy Utility
000D EMQ EMQVN00  01010 09-30-1993 Display Status at Host
000E EMQ EMQVQ00  01010 09-30-1993 Display Status at RDB
3000 XMP XMPVTIME 01010 12-15-1993 Sample Tool

Figure 142. DHS/VM: EMQVTOOL CONFIG File

5.5.4 Operating DataHub Support/VM

This section describes how to operate DataHub Support/VM.

Activating DataHub Support/VM Gateways

Figure 143 illustrates the flow for the AGW ACTIVATE GATEWAY command for gateway1. The steps of the communication flow are as follows:

Figure 143. DHS/VM Communication Flow for ACTIVATE Command

1. The AGW ACTIVATE GATEWAY and CNOS requests are issued from the AVS console at the VM host, or from the AGWPROF GCS file when the AVS machine is logged on.

   AVS receives the AGW ACTIVATE GATEWAY request, activates the gateway, and returns the "GATEWAY ACTIVATED" message to the AVS console.

2. AVS loads the DataHub Support/VM CMR (CMR1) into GCS private storage and invokes it with an ACTIVATE event.

3. The DataHub Support/VM CMR (CMR1) reads and validates the Service Pool configuration file with a ddname matching the gateway name in the AGW ACTIVATE GATEWAY request.

4. If there are no errors during validation, the DataHub Support/VM CMR (CMR1) invokes CP to log off all of the Service Pool machines. The message "userid LOGGED OFF" is displayed on the AVS console for each Service Pool machine that is logged off ("userid" is the Service Pool machine ID). The DataHub Support/VM CMR (CMR1) then returns control to AVS.

5. AVS returns messages from the DataHub Support/VM CMR (CMR1) to the AVS console. See "DataHub Support/VM CMR Messages and Codes" in the DataHub Support/VM Installation and Operations Guide.

If the gateway has been activated successfully and the DataHub Support/VM CMR did not return any error messages, the gateway is ready to receive requests from DataHub/2.

Allocating Conversations

Once a DataHub Support/VM gateway is activated, communication between DataHub/2 and DataHub Support/VM can begin. Figure 144 illustrates the communication flow for an ALLOCATE conversation request. SPM00000 is the Service Pool machine user ID that will be autologged when needed.

Figure 144. DHS/VM Communication Flow for ALLOCATE Command

The following sequence of events illustrates the flow of an inbound DataHub/2 request to a DataHub Support/VM host:

1. A remote program issues an ALLOCATE conversation with a private DataHub resource. The remote program specifies a target LU, target transaction program name (TPN) pair as (GW1,TH). TH (Task Handler) is the resource name specified in the system configuration file under the RESOURCEID keyword. GW1 is defined to VTAM as being owned by AVS, so VTAM routes the request to AVS.

2. Because the DataHub Support/VM CMR (CMR1) is associated with GW1, AVS invokes it.

3. The DataHub Support/VM CMR (CMR1) passes back to AVS the selected Service Pool machine ID to be used by the conversation. The Service Pool machine selected can be any machine from the Service Pool configuration file that is available (not currently in use by another DataHub request). In Figure 144 on page 188, CMR1 passes back SPM00000 as the Service Pool machine ID.

4. AVS translates the incoming connection request to an APPC/VM CONNECT request for a private resource TH located at SPM00000.

5. CP verifies that the DataHub/2 user ID and password pair that was sent is valid. If the pair is not valid, the connection request is rejected and a connection is not established. If the pair is valid, CP routes the verified user ID to SPM00000.

6. If SPM00000 is not logged on, CP automatically logs it on and executes the PROFILE EXEC.

7. CMS invokes either the EMQVINIT or EMQVDCSS EXEC. In this example, it invokes the former, which loads the DataHub Support/VM Task Handler from the EMQVLOAD LOADLIB. (The EMQVDCSS EXEC would load the DataHub Support/VM Task Handler from a saved segment.)

8. The SPM00000 DataHub Support/VM Task Handler establishes a conversation with the remote DataHub/2 program.

Completing Conversations

Once the conversation with the remote DataHub/2 is established, the conversation can be completed. Figure 145 illustrates the conversation flow for the remainder of the conversation.

Figure 145. DHS/VM Communication Flow for Completing Conversations

To complete the conversation, the following sequence of events occurs:

1. Data passes back and forth between DataHub/2 and the DataHub Support/VM Task Handler in the Service Pool machine, using the services of VTAM, AVS, CP, and CMS. The DataHub Support/VM CMR (CMR1) is not invoked again until conversation deallocation.

2. The DataHub Support/VM Task Handler receives the request from DataHub/2 and determines which tool to invoke.

3. The tool processes the request (it may invoke SQL/DS) and returns control to the Task Handler.

4. The Task Handler returns a message to DataHub/2 indicating that the tool has completed.

5.5.5 Performance Consideration

The main performance consideration in the DataHub Support/VM platform is the number of Service Pool machines associated with a gateway through one CMR. This number depends on:

• Number of DataHub/2 users attached to the VM host
• Length of time for the average request (For example, requests for Copy Data and Utilities will run longer than requests for Display Status; therefore more Service Pool machines are required for these types of requests.)
• Maximum number of tasks running at the same time
• Future expansion considerations.


Recommendations

• Initially, it is recommended that there be at least 1 Service Pool machine for each 1-10 DataHub/2 workstations connected to the VM host.
• If the usage on the VM host is frequent, 2 Service Pool machines are recommended for each 2-5 DataHub/2 workstations.
• If there are a large number of DataHub/2 workstations and the usage on the VM host is infrequent, 1 Service Pool machine is recommended for each 1-25 DataHub/2 workstations.

Note: Having more Service Pool machines than needed does not incur additional CPU overhead (except for their maintenance), because only as many Service Pool machines as are needed will be running during the peak period.

You can monitor Service Pool machine usage by referring to the CMR alert file, EMQVCMR ALERT, on the AVS machine's A-disk. If there are not enough Service Pool machines available to process DataHub/2 requests, this file will contain many occurrences of the following record:

WARNING: Service Pool machine not available. Symptom data is: gateway name.
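To inspect the alert file from another CMS user ID, you can, for example, link to the AVS machine's 191 disk in read mode (AVSVM here is a hypothetical AVS machine user ID, and the LINK may require a read password or directory authorization):

LINK AVSVM 191 391 RR
ACCESS 391 B
XEDIT EMQVCMR ALERT B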

5.6 Configuring AS/400 for DataHub Support/400

The SystemView Information Warehouse DataHub Support/400 program product handles the CPI-C conversation with the DataHub/2 workstation and coordinates the requested functions on the AS/400 host.

We do not cover all the installation and configuration tasks in detail because they are explained in the DataHub Support/400 Installation and Operations Guide.

The tasks required to install and configure DataHub Support/400 are:

• Configure APPC.
• Configure DRDA.
• Install DataHub Support/400.
• Verify the DataHub Support/400 installation.
• Create the QEMQUSER collection.
• Bind DataHub/2 to the AS/400.
• Bind DataHub Support/400 to the other hosts (if necessary).
• Bind other DataHub Support hosts to the AS/400 host.
• Configure DataHub Support/400 as a prestart job (optional).

5.6.1 AS/400 Network Definition

The detailed steps to define the AS/400 in the network are described in 3.6, "AS/400 Definitions" on page 60.


5.6.2 Configuration Considerations

DataHub Support/400 configuration is very simple. This section gives some important tips that clarify some AS/400 characteristics.

AS/400 RDBMS Considerations

The AS/400 has an integrated relational database. Therefore a separate RDBMS does not have to be installed.

DataHub/2 and all DataHub products can only access or manage OS/400 database objects that are located in an OS/400 collection.

QEMQUSER Collection

After the installation of DataHub Support/400 you must execute the QDMU/QSTUTCCL program, which creates the QEMQUSER collection. The QSTUTCCL program does not require the SAA SQL/400 program product. To execute this program, enter the CALL QDMU/QSTUTCCL command.

Binding DataHub Support/400 to Other Hosts

To use all the DataHub/2 functions, the DataHub Support/400 program must be bound to all other RDBMSs. To do this, sign on to the AS/400 system as the QEMQUSER user and enter the following control language command:

CRTSQLPKG QDMU/QSTCSQL RDB(xxxxxxxx)

where xxxxxxxx is the name of the RDBMS to which you want to bind DataHub Support/400.
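For example, to bind DataHub Support/400 to the DB2 system known elsewhere in this chapter as DB2CENTDIST, you would enter:

CRTSQLPKG QDMU/QSTCSQL RDB(DB2CENTDIST)

The RDB name must already exist in the AS/400 relational database directory (see 3.6, "AS/400 Definitions" on page 60).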

DataHub Support/400 as a Prestart Job

DataHub Support/400 can optionally be set up as a prestart job. This means that the DataHub Support/400 task is ready to begin processing whenever DataHub/2 requests work to be done. The prestart job provides better performance when DataHub/2 accesses the AS/400 system.

To use DataHub Support/400 as a prestart job, do the following (a combined command sequence follows this list):

1. Determine the subsystem in which DataHub Support/400 jobs run. The IBM-supplied defaults are QBASE and QCMN.

   If you want to set up a separate subsystem for DataHub Support/400, consider the following:

   a. You need communications entries in the subsystem for each of the workstations running DataHub/2.
   b. You need to add the prestart job entries to the subsystem.
   c. You must start the subsystem before you start QCMN or QBASE.

2. Stop the subsystem.

3. Add a prestart job entry as follows:

   ADDPJE SBSD(subsystem_name) PGM(QDMU/QSTSC)

   The other parameters can be varied, depending on the expected use of the product and your environment. Refer to the AS/400 Work Management Guide, SC41-8078, for more information on prestart jobs.

4. Start the subsystem.
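As a combined sketch, assuming the default QCMN communications subsystem is used (adjust the subsystem name, and tune the remaining ADDPJE parameters, for your environment):

ENDSBS SBS(QCMN) OPTION(*IMMED)
ADDPJE SBSD(QCMN) PGM(QDMU/QSTSC)
STRSBS SBSD(QCMN)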


5.7 Configuring OS/2 for DataHub Support/2

This section covers all the OS/2 definitions that allow the DataHub/2 workstation to manage OS/2 Database Manager databases. It shows how to customize the different components used for DataHub tools conversation connectivity. Because the OS/2-managed host must be a participant in the DRDA and the RDS network, you should have already implemented this OS/2 host in that network. (See 3.7, "OS/2 Definitions" on page 69.)

This section does not cover how to install the DataHub Support/2 component. You should refer to the DataHub Support/2 Installation and Operations Guide for more information.

5.7.1 OS/2 Host Components

The same prerequisite components are required on the DataHub Support/2 host as on the DataHub/2 workstation. The only exception is that DataHub Support/2 requires the server version of OS/2 Database Manager, because the DataHub/2 workstation will need to access that machine as a client using the RDS protocol. The DDCS/2 software is not required if this OS/2 host does not connect to a DRDA server to transfer data from or to it (for example, in the DataHub copy data operation).

What is different from a DataHub/2 workstation are the definitions that will be required. The DataHub/2 workstation needs to know all the hosts that are managed by the DataHub/2 product. The OS/2-managed host needs to know only the host(s) to which data will be transferred in a copy operation.

If you use one host as a data repository from which you copy data, all the managed hosts have to know that repository host and need to connect to it. If you use the OS/2 Database Manager system to capture data that eventually will be transferred to a central site, the OS/2 Database Manager system needs to know only one system.

OS/2 Host Component Relationship

Another difference between the DataHub/2 workstation and the DataHub Support/2 hosts is the relationship between the different components. Figure 146 on page 194 shows the key connectivity parameters relationship on the OS/2-managed host. It shows only the tools conversation flow on this workstation. To understand the RDS connectivity, you should refer to Figure 46 on page 71 and follow the Inbound request: CONNECT.

The tools conversation request that has been sent by the DataHub/2 workstation contains the following information:

• Logical unit name
• Network ID
• Mode name to use
• Transaction program name that will process the request locally
• Data that will contain the RDB name.

The program that is associated with the transaction program name will execute the request and connect to the OS/2 Database Manager database using the RDB name mapping information provided in the DataHub Support/2 map file.

The map file name and location are defined by the RDBNAMEMAP parameter in the DataHub Support/2 configuration file. The configuration file name and location are defined by the SET EMQ2SYS parameter in the OS/2 CONFIG.SYS file.

Figure 146. DHS/2: OS/2 Component Relationship Diagram

5.7.2 DataHub Support/2 Customization

Three files need to be customized to enable DataHub Support/2 to work properly:

• OS/2 configuration file (CONFIG.SYS)
• DataHub Support/2 configuration file
• DataHub Support/2 map file.


OS/2 Configuration File (CONFIG.SYS)

When you install the DataHub Support/2 product, many lines are updated, and this line is inserted:

..............
SET EMQ2SYS=<path>\EMQ2SYS.CFG
..............

where <path> indicates the directory in which the DataHub Support/2 product is installed. This line sets up the OS/2 system variable EMQ2SYS so that, when the DataHub protocol handler program is started by OS/2 Communications Manager, the program knows where to find the DataHub Support/2 configuration file.

DataHub Support/2 Configuration File

The DataHub Support/2 configuration file contains many entries. One entry is very important for connectivity purposes: the RDBNAMEMAP parameter. This parameter points to the map file that associates a DataHub/2-defined RDB name with an OS/2 Database Manager database alias name.

Note: Because this entry is not included during installation, you need to add it yourself. Figure 147 shows the definitions we used.

* DataHub Support/2 Configuration File
TOOLTABLE=D:\EMQ2\EMQ2TOOL.CFG
ALERTPATH=D:\EMQ2\ALERT
TRACEPATH=D:\EMQ2\TRACE
RDBNAMEMAP=D:\EMQ2\DHS2.MAP

Figure 147. DHS/2: Configuration File (EMQ2SYS.CFG)

DataHub Support/2 RDB Name Map File

A map file is necessary to map an RDB name to an OS/2 Database Manager database alias name. If the RDB name and the database alias name are the same, no entry is necessary. Figure 148 shows the map file we used for the ITSC network. All the RDB names are defined because we want to be able to copy from one RDB name to another. In a real production environment, you might need only one or two entries.

The first column represents the RDB name as defined in the DataHub/2 database. The second column is the OS/2 Database Manager database alias name. You should compare this figure with the System Database Directory window in Figure 120 on page 167.

DB2CENTDIST  MNYCENT
DBMMTLDLR1   OMTLDB
DBMRIODLR1   ORIODB
DBMPARDLR1   OPARDB
SQLDDLR01    VTORDB
SJ400DLR1    ASJDB

Figure 148. Map File (DHS2.MAP)


5.7.3 OS/2 Communications Manager Definitions

The OS/2 Communications Manager is required on a DataHub Support/2 host. This section explains which components you need to customize. It also explains in detail the two components that are specific to a DataHub Support/2 implementation.

First you need to add this host to the SNA network. In addition to the physical connection (which we do not discuss in this document except for the token-ring connection), you will need to customize OS/2 Communications Manager for the proper SNA definitions. Specifically, you need to:

• Define the local node characteristic.
• Add or change a connection to a host, which includes defining the partner logical unit (remote RDB).
• Add a mode (if necessary).

These definitions are explained in detail in "IDBLK, IDNUM, Network ID, Logical Unit Name, SNA Control Point Name" on page 77. The following definitions are specific to DataHub Support/2 and are explained below:

• Transaction program name
• Conversation security.

Transaction Program Name

The transaction program name is an alias for a DataHub Support/2 program name. The alias can be any name, but it must be the same name as the one used in the symbolic destination name definition. The example shown in Figure 119 on page 165 is for the DataHub Support/MVS component, which could be any name. For the DataHub Support/2 component the TP name must be the same at both ends (that is, the DataHub/2 workstation and the DataHub Support/2 workstation).

To add a transaction program definition, you need to start the OS/2 Communications Manager setup program and go through the windows until you get to the SNA Features List window. This procedure is documented in "IDBLK, IDNUM, Network ID, Logical Unit Name, SNA Control Point Name" on page 77. Then follow this procedure:

1. Select the Transaction program definitions line from the SNA Features List window.

   Any already defined transaction programs will appear in the list box. As can be seen in Figure 149 on page 197, no definition exists.

Figure 149. SNA Features List Window (Transaction Program Definitions)

2. Click on the Create... push button.

   The Create a Transaction Program Definition window will then appear.

3. Fill in the appropriate information.

   The transaction program (TP) name alias must be the same name as defined in the DataHub/2 workstation.

   The OS/2 program path and file name must indicate the directory in which DataHub Support/2 is installed. The name of the program for the current version of DataHub Support/2 is EMQ2T.EXE.

   You also need to specify conversation security if you want to protect the execution of this program from a remote location. You will define later what security will be used for this TP program. Figure 150 on page 198 shows the TP name definition we used for all our OS/2-managed hosts.

4. Click on the Continue... push button when you have finished.

   The Create Additional TP Parameters window will then appear (Figure 151 on page 198).

Figure 150. Create a Transaction Program Definition Window

5. Click on the Background radio button in the Presentation type field.

6. Click on the Non-queued, Attach Manager started radio button in the Operation type field.

7. Click on the OK push button.

   The SNA Features List window (Figure 149 on page 197) should then reappear with the new TP name added to the list box.

Figure 151. Create Additional TP Parameters Window

Conversation Security

Conversation security protects access to local resources from a remote system. The definition simply tells OS/2 Communications Manager to use the OS/2 User Profile Management security system instead of specific userid and password definitions.

The conversation security specifications are done from the SNA Features List window. These are the steps to be performed from that window:


1. Select the Conversation security line.

   The list of userids defined will be shown in the list box. As shown in Figure 152, none is defined, and we will need to create one entry.

2. Click on the Create... push button.

Figure 152. SNA Features List Window (Conversation Security)

   The Create Conversation Security window will then appear (see Figure 153 on page 200).

3. Click on the Utilize User Profile Management check box.

   A generic ID (*) will then appear in the User ID field as shown in Figure 153.

4. Click on the Add push button.

Figure 153. CM/2: Create Conversation Security Window

   A refreshed Create Conversation Security window will appear with the generic ID (*) to indicate that the UPM security system is used for conversation security (Figure 154 on page 201).

5. Click on the OK push button.

Figure 154. Create Conversation Security Window (After Definition)

   The SNA Features List window will reappear with the generic ID added to the list box.

5.7.4 OS/2 Database Manager Definitions

Your OS/2 Database Manager directories must be customized to enable DataHub/2 and DataHub Support/2 to connect. OS/2 Database Manager directory entries are necessary for:

• Each local database managed through the DataHub/2 workstation
• Each remote RDBMS to which this system will connect for the copy function.

Most OS/2 Database Manager customers use the same database name for databases that are spread throughout a network. For each of the OS/2 Database Manager managed databases, two entries are required in the System Database directory:

1. One entry with the same value for the alias and the database name. This entry is added when the database is created.

2. One alias to the real database name. This alias must be unique in the DataHub/2 network. A nomenclature must be decided on in order to help you define those aliases. (A sketch of cataloging such an alias follows below.)

Refer to 3.7.5, "DRDA Definitions" on page 86 for the different steps involved in defining OS/2 Database Manager directories.
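As an illustrative sketch only, the second entry can be cataloged from a REXX program through the Database Manager SQLDBS interface; the database name SAMPLE and the alias DHSAMP01 are hypothetical, and the exact SQLDBS command syntax should be verified in the Database Manager programming reference:

/* Catalog a network-unique alias for an existing local database */
CALL SQLDBS 'CATALOG DATABASE SAMPLE AS DHSAMP01'
SAY 'SQLCODE =' SQLCA.SQLCODE    /* 0 means the alias was cataloged */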


5.8 Recommendations

We recommend that you use a meaningful naming convention for every name you create. Establish a plan to define the names that you will use in the OS/2 Communications Manager and the hosts with which you want to connect, and identify the names that must match. The object names you must consider are:

• Host name
• RDB name
• Symbolic destination name
• LU name
• Partner LU alias name
• OS/2 DBM database alias name.

We recommend that your organization's database administrator (that is, the DataHub/2 user) coordinate the installation and testing of the DataHub Support components to ensure that the local DataHub/2 workstation definitions match the managed host definitions.


Chapter 6. Problem Determination

This chapter provides guidelines to help you perform problem determination. As a DataHub/2 user, you probably will be the first to discover that a problem exists. Some problems, such as those caused by incorrect local definitions, will be easy to resolve locally. This chapter explains the tools you have on your DataHub Support workstation to perform problem determination.

Usually, the DataHub/2 user is a database administrator who knows what to do with RDBMS problems. But problem determination in a complex environment like distributed databases will be a team effort. The DataHub/2 user will need help from host personnel.

Host personnel have many tools and logs they can use to help DataHub/2 users perform their problem determination tasks. These tools and logs are discussed in the host problem determination sections later in this chapter.

Despite the availability of host problem determination aids, we recommend that you have a Help Desk to assist you in determining the source of network problems.

6.1 Implementation Strategy

Before describing problem determination, let us review some key implementation strategy recommendations. A good implementation strategy will eliminate many of the problems that you could encounter later with DataHub.

Before implementing your DataHub environment, we highly recommend that you install and test, step by step, all components starting with the lowest level of security. We suggest, for example, that you use the same userid and password for all the RDBMS hosts and the DataHub/2 workstation. To avoid problems related to security and authorization checking, at least when you get started, that userid should have database administration privileges on all RDBMS systems.

Let us assume that all RDBMSs are operational and used locally by applications. Your distributed databases may not be implemented yet, because of a lack of remote management tools like the DataHub family of products. In this case, your distributed systems would be installed in a development environment for application development and testing.

The principal steps we highly recommend are:

1. Install the DRDA and/or RDS environment on one RDBMS host.
2. Install the DRDA and/or RDS environment on the DataHub/2 workstation.
3. Test the DRDA and/or RDS connectivity between the RDBMS host and the DataHub/2 workstation.
4. Install the DRDA and/or RDS environment on all the other RDBMS hosts that will be managed.
5. Test the DRDA and/or RDS connectivity between the new RDBMS hosts and the DataHub/2 workstation.


6. If a function such as Copy Data is to be used, customize and test the DRDA and/or RDS environment for RDBMS-to-RDBMS connectivity for the hosts that will be involved in the copy operation.

Now that the DRDA and/or RDS environment is operational, you can start implementing the DataHub products:

1. Install and customize DataHub/2 on the DataHub/2 workstation.
2. Test the DataHub/2 customization using a local database (for example, SAMPLE created using the OS/2 Database Manager SQLSAMPL program).
3. Test the DataHub/2 customization using a remote RDBMS.
4. Install one of the DataHub Support components on one of the hosts.
5. Customize the DataHub Support host. Then test the connection between the DataHub/2 component on the DataHub/2 workstation and the new DataHub Support component. We suggest that you use:

   • The Run JCL function with DataHub Support/MVS
   • RDBMS WORK with DataHub Support/VM and DataHub Support/400
   • The Copy function with DataHub Support/2.

   See "Appendix C: Verifying your DataHub installation" in the DataHub/2 Installation and Administration Guide for more information.
6. Install the other DataHub Support components on the other hosts.
7. Customize the DataHub Support hosts. Then test the connection between the DataHub/2 component on the DataHub/2 workstation and the new DataHub Support components.

Now that you have implemented the easiest functions, you can start using the Copy Data function, which involves two different host RDBMSs:

1. Test the Copy Data function between two RDS hosts (that is, OS/2).
2. Test the Copy Data function between two DRDA hosts.
3. Test the Copy Data function between one DRDA host and an RDS host (that is, OS/2).

Note: The number of steps involved in using the Copy Data function depends on your configuration.

Now that you have tested DRDA and DataHub tools conversation connectivity, you can increase the complexity of your environment by implementing more security in the system, if your company policies so require.
security in the system, if your company policies so require.<br />

In this section, we discuss connectivity problems only and not other types of<br />

problems that could occur on the DataHub/2 workstation. You must refer to the<br />

DataHub/2 Installation and Administration Guide (SC26-3043), appendixes D and<br />

E, and the DataHub/2 Message Reference (SC26-3042) for a complete picture of<br />

problem determination.<br />

The tools that are discussed in this section are the:<br />

• DataHub/2 log file


• EMQDUMP command
• OS/2 Communications Manager trace utility
• DataHub/2 trace utility
• DDCS/2 trace utility.

Other tools exist on the DataHub/2 workstation. For example, reports are produced during DataHub/2 operation and can be used for operations-related problem determination.

You may also have to use other tools that relate to OS/2 Database Manager and OS/2 base operating system problems. For such problems you should refer to the DATABASE 2 OS/2 Messages and Problem Determination Guide, which explains in detail such tools as FFST/2 and OS/2 dumps.

6.2.1 DataHub/2 Log File

The DataHub/2 log file is certainly the most important tool the DataHub/2 user has for problem determination. As soon as a problem appears on the DataHub/2 message window, the DataHub/2 user should immediately view the DataHub/2 log file. Why? Let us explain.

Problem determination starts when an error message appears on the DataHub/2 message window. If the message is self-explanatory, you can correct the problem using the DataHub/2 documentation as a reference.

But many times, the message is a general message like this one:

010 DataHub D:\EMQDIR\EXE\DATAHUB.EXE 1993-03-20-10.49.01.31
EMQ9018I The program, EMQQS.EXE, ended with the return code, 12, for the action, DISPLAY.

For such messages, you will need to display the DataHub/2 log file. You can find this file in the directory specified by the CONFIG.SYS SET EMQDIR=<path> configuration parameter. The directory could contain many log files, depending on the number of DataHub/2 programs active on your workstation.

Recommendation

Because you will need to access the log file very frequently, we recommend that you create an object on the OS/2 desktop that will start an OS/2 editor session passing the name of the log file as a parameter. This is explained in 4.8, "Recommendations" on page 148.

Recommendation

Because the log file could become very large, we suggest that you rename the active log file and, if disk space is important, archive the old files for future reference. The sample messages we are using in this section are taken from our old log files.


When you open the log file, the problem you are looking for is stored at the end of the file. Make sure the problem on which you are working is the "good" one by verifying the date and time information.

Frequently, the error will create multiple entries in the log file. The first entry is the most important one. For example, for the general error message presented above, these were the log file entries:

011 DISPLAY D:\EMQDIR\EXE\EMQQS.EXE 1993-03-20-10.48.40.28
EMQ9251W Exceptional return code, CM_TP_NOT_AVAILABLE_NO_RETRY, received from communication service, CMRCV.

010 DataHub D:\EMQDIR\EXE\DATAHUB.EXE 1993-03-20-10.49.01.31
EMQ9018I The program, EMQQS.EXE, ended with the return code, 12, for the action, DISPLAY.

Some problems produce four entries in the log file. A line of dashes separates the different DataHub/2 sessions.
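
Because you will return to the end of the newest log file again and again, a small helper script can save time. The following Python sketch is our illustration only, not part of DataHub/2: it assumes the log files match a *.LOG pattern and that the EMQDIR directory can be taken from an environment variable, so adjust both to your installation.

   # Illustrative sketch only: print the newest entries from the most recently
   # modified DataHub/2 log file. The *.LOG pattern and the EMQDIR environment
   # variable are assumptions; check your CONFIG.SYS SET EMQDIR= value.
   import os
   import glob

   emqdir = os.environ.get("EMQDIR", r"D:\EMQDIR")   # assumed default path
   logs = glob.glob(os.path.join(emqdir, "*.LOG"))   # assumed naming pattern
   if not logs:
       raise SystemExit("No log files found in " + emqdir)

   newest = max(logs, key=os.path.getmtime)          # problems are stored at the end
   with open(newest) as f:
       lines = f.read().splitlines()

   print("Last entries of", newest)
   for line in lines[-20:]:                          # verify the date/time stamps here
       print(line)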

Recommendation

In a LAN server configuration, the user data, which includes the log file, should be put on the user's workstation as explained in 4.4.3, “Installation Environment Preparation” on page 119. Such placement will prevent having entries for different users in the same log file.

Now that you know the real source of the problem, the most common problems should be easy to solve. Just by looking into the log file, we were able to fix many common user customization and installation problems, such as:

• DataHub/2 database not found or not defined
• OS/2 Database Manager database alias name not defined (or incorrectly entered in the DataHub/2 database configuration)
• Improper authority on the DataHub/2 package
• DataHub/2 package not bound to the target RDBMS
• No DataHub/2 user profile entry defined for the target RDBMS
• Lack of resources in OS/2 Database Manager, for example:
  − Communication heap size (comheapsz)
  − RDS heap size (rsheapsz)
  − Maximum number of remote connections (numrc)
• Lack of resources in the DataHub/2 database, for example:
  − Maximum active applications (maxappls).

Many of the messages are OS/2 Database Manager messages because DataHub/2 uses OS/2 Database Manager to execute most functions.

A good part of problem determination is understanding the protocol(s) used by each DataHub/2 function. Refer to 1.2, “DataHub Data Flows: Overview” on page 6 for the protocols used by the different DataHub/2 functions. If the function that fails is a display of tables, you will not need to look for problems related to the tools conversation protocol.


In 1.2, “DataHub Data Flows: Overview” on page 6 you will find a table that associates the protocol with the DataHub/2 function. All the functions, except one, use one protocol. Because the copy data function may consist of multiple steps that use different protocols, you need to know at which step the problem occurred.

Now, let's say that a problem is not of the easy-to-determine category. For example, you will surely see a few of the problems shown here:

   011 DISPLAY H:\EXE\EMQSGPRO.EXE 1993-03-03-15.53.07.66
   SQL30080N A communication error "000F-00000000" occurred sending or receiving data from the remote database. SQLSTATE=58019

This problem will certainly be familiar to you:

   011 DISPLAY H:\EXE\EMQSGPRO.EXE 1993-03-03-17.07.37.44
   SQL30080N A communication error "0003-00000004" occurred sending or receiving data from the remote database.

The SQL30080N message is the most common communication message. It occurs when OS/2 Database Manager tries to establish a session with another RDBMS. The good news is that this is not a DataHub/2 problem.

For more difficult problems like those listed above, you will need to use different tools and even consult with other specialists.

6.2.2 EMQDUMP Command

The EMQDUMP command (that is, EMQDUMP.CMD) gathers diagnostic and configuration information from the DataHub/2 workstation and puts it all in one file. This file can be used for problem determination by your support personnel or sent to an IBM support service representative.

The EMQDUMP command creates an EMQDUMP.ZIP file containing the following information:

• CONFIG.SYS file
• DataHub/2 configuration, log, and trace files
• OS/2 Communications Manager configuration files (.NDF, .SEC, .CF2, .CFG)
• OS/2 Database Manager configuration information
• DataHub/2 database tables (IXF format)
• Directory listings of DataHub/2 directories
• Install level files (.LVL)
• OS/2 SYSLEVEL information.

For more information regarding use of the EMQDUMP command, refer to the DataHub/2 Installation and Administration Guide.
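
Before sending an EMQDUMP.ZIP file to support personnel, it can be useful to verify what was actually collected. A minimal Python sketch follows (our illustration; only the EMQDUMP.ZIP file name comes from the text):

   # Minimal sketch: list what an EMQDUMP.ZIP package contains before sending
   # it to support. Uses only the zipfile module from the standard library.
   import zipfile

   with zipfile.ZipFile("EMQDUMP.ZIP") as z:
       for info in z.infolist():
           print(f"{info.file_size:>10}  {info.filename}")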



6.2.3 OS/2 Communications Manager Trace Utility

The OS/2 Communications Manager trace utility is to be used when an LU 6.2 error appears in the DataHub/2 log file or when you are using OS/2 Database Manager directly, through the command line processor interface or a program other than DataHub/2.

The LU 6.2 errors are easy to recognize. First, they appear with the OS/2 Database Manager SQL30080 message when there is a DRDA or RDS connectivity problem. Second, they consist of two parts (for example, 0003-00000004). The first part is the primary return code, and the second part is the secondary return code.
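
Because these two-part codes appear so often, it helps to split them mechanically before reaching for the manuals. The following Python sketch shows the split only; the meaning of each code must still be looked up in the APPC Programming Reference:

   # Split an OS/2 Communications Manager LU 6.2 error code, such as the one
   # embedded in SQL30080N, into its primary and secondary return codes.
   # No code meanings are hard-coded here; look them up in the APPC reference.
   def split_appc_code(code):
       primary, secondary = code.split("-")
       return int(primary, 16), int(secondary, 16)

   pri, sec = split_appc_code("0003-00000004")
   print(f"primary return code:   x'{pri:04X}'")
   print(f"secondary return code: x'{sec:08X}'")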

Recommendation

Before implementing DRDA and DataHub, we recommend that you make sure you have these books available in your organization to help with problem determination:

• IBM Extended Services for OS/2 Communications Manager APPC Programming Reference (S04G-1025) or Communications Manager/2 APPC Programming Guide and Reference (SC31-6160)
• Systems Network Architecture Formats (GA27-3073)
• Systems Network Architecture Format and Protocol Reference Manual: Architecture Logic for LU Type 6.2 (GC30-3269).

The IBM Extended Services for OS/2 Communications Manager APPC Programming Reference (S04G-1025) contains a chapter that explains what these primary and secondary return codes mean. If you cannot figure out what the codes mean, you should contact your communications specialist for help. This specialist could ask you to perform a local trace. Another solution is to trace on the host side (MVS, VM, or AS/400).

To understand the trace, you need to understand the format of the data flowing through the network. The most important code to find is the sense code that is sent by the remote Control Point to indicate why the session has not been established. If you understand the LU 6.2 formats and protocols, you will be able to do exact problem determination.

To create a trace file from OS/2 Communications Manager, you need to:

• Start the OS/2 Communications Manager trace facility, specifying what you want to trace (for example, IBMTRNET, APPC). The way you start the trace varies among the different versions of OS/2 Communications Manager.
• Reproduce the error.
• Stop the OS/2 Communications Manager trace facility.
• Format the trace.


To start an OS/2 Communications Manager APPC trace, you can use a command from the OS/2 command line. The following command traces APPC on the token-ring network:

   CMTRACE START /API APPC SERVICES /DATA IBMTRNET /STORAGE 16 /EVENT 1 2 3 4 5 12


After the trace is started, you need to reproduce the problem and then issue the APPNF command to format the trace data that has been stored in memory (that is, STORAGE 16 (16K)). The APPNF command stops the trace and creates three files with three different suffixes:

.TRC   Unformatted trace. Normal OS/2 Communications Manager trace format. A good knowledge of the LU 6.2 formats and protocols is required to understand the trace.

.DET   Formatted trace. This file contains the field names and their associated values. It is easier to read than the .TRC file, but a good understanding of the LU 6.2 protocol is required to understand the field names, their use, and their possible values. For example, from this file you can get the sense code value that explains why you are getting the famous APPC return code 0003-00000004.

.SUM   Summary of the formatted trace. This file contains a summary of the LU 6.2 session. It does not contain any detailed information to help you do problem determination.

It is of no use to take a trace if you do not understand it or if no one in your organization understands it. The good news is that there is an alternative to APPC traces: talk with your host support personnel and exchange your definitions. As a DataHub/2 user, you should use your customized .NDF file, which can be found in the :\CMLIB (CM/2) or the :\CMLIB\APPN (Extended Services for OS/2) directory.

You need to exchange with your communications specialist the DRDA key connectivity parameters, as indicated in Table 3 on page 37 and Table 4 on page 37, and the DataHub key connectivity parameters, as indicated in Table 18 on page 152. Perhaps your definitions match, but the problem is that the partner logical unit (for example, DB2, the SQL/DS AVS machine, or the OS/2 host) is not operational. Another possibility is a host or NCP capacity problem: you cannot allocate a session at prime shift even though you know that your definitions are good.

6.2.4 DataHub/2 Trace Utility

You use the DataHub/2 trace utility to solve connection problems for the functions that use the tools conversation protocol, as indicated in 1.2, “DataHub Data Flows: Overview” on page 6. For example, the only way to find out which symbolic destination name DataHub/2 passes to OS/2 Communications Manager is to use the DataHub/2 trace utility.

The DataHub/2 trace utility enables you to trace everything that is happening on your DataHub/2 workstation. If you activate the DataHub/2 trace with all the available options, you will end up with a trace file that is too big and, even worse, your response time will be significantly degraded.

The DataHub/2 trace facility is available from the Trace... action in the Options pull-down menu on the DataHub/2 main window.

You should trace only the necessary components. For connectivity problems, you activate these check boxes in the DataHub/2 Trace Controls window:


• Perform Trace
  − On the PWS platform
  − On the PWS tools
  − On the host (if necessary)
• Trace Controls for Communications Manager
  − Entry/Exit
  − Input
  − Output
• Trace Controls for Tools conversation flows (internal).

For an understanding of the CPI-C (Common Programming Interface - Communications) verbs that are used between DataHub/2 and OS/2 Communications Manager, consult the SAA Common Programming Interface Communications Reference (SC26-4399).

6.2.5 Distributed Database Connection Services/2 Trace Utility

All the tools we have discussed up to now are used for SNA connection problems. Problems can also occur with the DRDA connection, however. The DDCS/2 SQLJTRC utility provides a record of the data exchanged between the DDCS/2 gateway workstation and the host RDBMS.

The SAA Distributed Database Connection Services/2 Version 2 Guide explains how to use this utility. You will also need to understand the DRDA verbs and protocols. Those are explained in these documents:

• IBM Distributed Data Management Architecture Level 3: Reference (SC21-9526)
• Distributed Relational Database Problem Determination Guide (SC26-4782).

6.3 MVS Host Problem Determination

To help customers and IBM technical professionals find solutions for their communication problems, this section explains how to analyze problems in the MVS environment. First it addresses the DataHub Support/MVS trace mechanisms. It then suggests an approach to problem diagnosis, reviews the communication problem types, and explains in general how to deal with each problem type. It also addresses DB2 and DDF error messages.

6.3.1 DataHub Support/MVS Trace Files

DataHub Support/MVS uses the Tool Dispatcher trace file, which contains diagnostic information for all activity within the Tool Dispatcher, and conversation trace files, which contain diagnostic information for all activity for a particular conversation with a single DataHub/2 workstation.

During the DataHub Support/MVS customization process you provide information to set up the default trace options and the DASD characteristics that determine where to put the trace records when the trace mechanism is activated. Basically, two panels are involved in this setup. Refer to Figure 155 on page 211 and Figure 156 on page 211, where you can see the values we supplied during customization.
page 211 where you can see the values we supplied during customization.


   DataHub Support/MVS Release 1
   CONFIGURE TOOL DISPATCHER REQUIRED PARAMETERS
   ===>

   Enter data below:

   Started Task  ===> EMQMAIN_              Name of procedure member
   Steplib       ===> DATAHUB.V110.SEMQMLIB_______________________
                                            STEPLIB for DataHub
                                            Support/MVS load modules
   Trace Flags   ===> 00110101101011010001  DataHub Support/MVS trace flags
   Application   ===> EMQMACB1              VTAM application id
   Mode Name     ===> EMQMLOGM              VTAM logmode name
   Maximum Users ===> 25_                   Maximum concurrent users
   JES Name      ===> JES2                  JES subsystem name
   JES Character ===> $                     JES command character

Figure 155. DataHub Support/MVS Trace Parameter Specification

The default trace flags are “00110101101011010001,” where 1 indicates “on” and 0 indicates “off.” Refer to Chapter 4 of the DataHub Support/MVS Installation and Operations Guide for more details on the meaning of the trace flags.
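
Reading a 20-character flag string by eye is error prone. The small Python sketch below simply reports which flag positions are on; what each position means is documented in the guide just cited and is deliberately not reproduced here:

   # Sketch: report which positions in the DataHub Support/MVS trace flag
   # string are set. Position meanings are in Chapter 4 of the DataHub
   # Support/MVS Installation and Operations Guide.
   flags = "00110101101011010001"

   on = [i + 1 for i, f in enumerate(flags) if f == "1"]
   off = [i + 1 for i, f in enumerate(flags) if f == "0"]
   print("flags on at positions: ", on)
   print("flags off at positions:", off)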

The data set where the traces are recorded, as well as the interval of the conversation trace, is set up in the customization panel shown in Figure 156. In our case, defaults are used for most of the parameters.

   DataHub Support/MVS Release 1
   CONFIGURE TOOL DISPATCHER OPTIONAL PARAMETERS
   ===>

   Enter data below:

   Conversation Trace Datasets:
     Device Type      ===> SYSDA___
     Volume Serial 1  ===> STDB2A    Volume Serial 2 ===> ______
     Space Units      ===> ____      CYLS, TRKS, or BLKS
     Primary          ===> _____     In above units
     Secondary        ===> _____     In above units
     Block Size       ===> _____     Multiple of 80
     Retention Period ===> ____      Days to retain the trace data set
   Refresh rate in seconds:
     Tools            ===> ____      Monitor active tools
     Conversations    ===> ____      Monitor active conversations

Figure 156. DataHub Support/MVS DASD Trace Data Set Definition



A DataHub user can request tracing for both the DataHub Support/MVS platform feature and the tools feature. The Tool Dispatcher trace is initiated by the MVS operator. Tracing to the conversation's trace file is initiated by the DataHub/2 workstation user. Refer to “Tracing on DataHub Support/MVS” in the DataHub Support/MVS Installation and Operations Guide for details on how to activate and deactivate the tracing facility at the Tool Dispatcher when it is started as a started task or a batch job. Also refer to the DataHub Support/MVS Installation and Operations Guide, Appendix B, for the initialization parameter defaults.

The Tool Dispatcher trace file name has the format “xxx.Annn.TDTRACE”; in our environment it is “DATAHUB.V110.A006.TDTRACE” (see Figure 157).

   EDIT ---- DATAHUB.V110.A006.TDTRACE -------------------------- COLUMNS 00
   COMMAND ===>                                                SCROLL ===>
   ****** ***************************** TOP OF DATA ************************
   000001 4 1993-03-18-15.35.34.336129 EMQMCR01 CS IE
   000002 5(01000020)(000E) 49 - åáXà‰à•àjä•äåäbä„àãá‘àlàià„à«ànà{àòàgàuà•
   000003 6on file 'DATAHUB.V110.STDB2C.J31982.RESULT'.
   000004 4 1993-03-18-15.35.35.577628 EMQMCR01 CS IE
   000005 5(01000020)(000E) 46 - ä•äåäbä„à㢰ñ_àòàgàuà•à“ä - remove on fi
   000006 610.STDB2C.J31982.RESULT'.
   000007 4 1993-03-18-15.35.35.842220 EMQMCR01 CS IE
   000008 6(01000020)(000E) Function EMQMCR01: Returned from function sSendE
   000009 4 1993-03-21-07.26.31.595502 EMQMCC01 CS IE
   000010 5(00000000)(0000) Data received from conversation not allocated, V
   000011 60, Conv key= ????
   ****** **************************** BOTTOM OF DATA **********************

Figure 157. DataHub Support/MVS DATAHUB.V110.A006.TDTRACE Data Set

Refer to the DataHub Support/MVS Installation and Operations Guide for the trace file formats.

The conversation trace files, which contain diagnostic information about all activity for a particular conversation with a single DataHub/2 workstation, look like the file shown in Figure 158.

   EDIT ---- DATAHUB.V110.STDB2C.A0060002.TRACE ----------------- COLUMNS 00
   COMMAND ===>                                                SCROLL ===>
   ****** ***************************** TOP OF DATA ************************
   000001 11993-03-18-15.35.35.095715Error Log
   000002 4MNYCENT 1993-03-18-15.35.35.137122 EMQMCR01 CS IE
   000003 549 - åáXà‰à•àjä•äåäbä„àãá‘àlàià„à«ànà{àòàgàuà•à“ä - fopen on
   000004 6V110.STDB2C.J31982.RESULT'.
   ****** **************************** BOTTOM OF DATA **********************

Figure 158. DataHub Support/MVS DATAHUB.V110.STDB2C.A0060002.TRACE Data Set

6.3.2 An Approach to Problem Diagnosis

In any error situation, it is important to understand the circumstances under which a problem is occurring, so that information pertinent to the problem can be gathered.


For example, a user has reported receiving SNA sense code 084E0000 when trying to access data at a remote site. The sense code description in the manual suggests that invalid session parameters are being sent. This information indicates that the session is probably just being established, and that the BIND parameters may not be acceptable to one of the LUs. The BIND parameters are taken from the LOGMODE entry used for the session.

With this in mind, to diagnose the problem, we need more information on the circumstances surrounding it. For example, you might want to use a display command to check whether there are any sessions with this remote site. If there are, were they established using the same LOGMODE entry as that being used by the failing session attempt? Has this LOGMODE ever successfully established a session, or is it only newly installed? Have any changes been made recently to the LOGMODE table? (For example, has the table been reloaded using a MODIFY command just prior to the start of the problem?) Usually, the LOGMODE will be newly installed or recently changed.

At this stage, you might want to check any changes. If this does not help determine the problem, the next step is to take a VTAM buffer trace of the session setup and find the PIU that caused the bad sense. In this set of circumstances, it will probably be a BIND PIU. Check which LU rejected the BIND; that is, which LU sent the negative response (with the sense).

Remember that the BIND flows first from the PLU to the SLU, and that a BIND response then flows from the SLU to the PLU. This BIND response may contain parameters that have been altered by the SLU. From the VTAM buffer trace, using Systems Network Architecture Formats (GA27-3136), you can work out each parameter and probably determine which parameter is causing the BIND to be rejected. If the problem still has not been found, comparing the failing parameters with those on a working session establishment between two DDF LUs may be of help.

In this example, we started with an SNA sense code. Checking the sense code in the manual put us in the session setup area. We knew how session setup should occur, so we could look for changes or problems in that particular area. We did not, for example, look into buffer usage, whether VTAM CPU utilization was high, and so forth. These points were extraneous and could have further complicated the diagnosis.

Often, the key to a problem is found by looking up the codes received in the relevant manual. Too often, a lot of time is wasted by not really understanding the problem. A rational approach to understanding the problem and the situation that caused it may resolve the problem quickly and prevent the same problem from recurring.

6.3.3 Communication Problem Types

The VTAM Diagnosis manual has a chapter on the procedures to be used when diagnosing each particular problem type. Please refer to that chapter in conjunction with reading this section.



Abends

All programs may abnormally end (abend) at one time or another. Most applications have some sort of recovery processing for avoiding actual termination of the application. VTAM is fairly typical in this respect and usually attempts to recover before terminating.

VTAM abends fall into two categories: those where the abend occurs in the VTAM address space, and those where the abend occurs in VTAM code in a user (for example, DDF) address space.
user (for example, DDF) address space.<br />

The link pack area (LPA) contains a lot of VTAM code that is accessible by all<br />

address spaces. Remember that VTAM has many of its control blocks in the<br />

common service area (CSA). These control blocks are also accessible by all<br />

address spaces. In this way, each address space can do a certain amount of<br />

VTAM processing in its own address space. In general, this is I/O processing:<br />

sending and receiving data across the application interface. Therefore the DDF<br />

address space may abend in a VTAM module and cause a dump of the DDF<br />

address space to be taken.<br />

VTAM has several MVS abend codes that are reserved for VTAM's use only. These are ABEND0A7, 0A8, 0A9, 0AA, 0AB, 0AC, and 0AD. The most commonly occurring of these abends is ABEND0A9. The value in general purpose register 15 (GPR15) gives some indication of the problem.

Sometimes the ABEND0A9 is preceded by an ABEND0C4. Always look for an indication of a previous error. This could be another abend, or an error message to the user or the console. It is not necessarily a VTAM message and may not occur immediately before the abend. LOGREC is a good source for finding abends that may have been missed on the console by the operators. It is important to start diagnosing a problem when the first sign of trouble occurs, as later problems are often part of the aftermath of the original problem. Recovery routines will normally retry the failing process before eventually terminating.

If there is no other indication of a problem, check whether a dump has been taken. For an ABEND0C4 or ABEND0A9, usually an SVC dump of the address space will be taken. In general, for complete diagnosis of a VTAM problem, the CSA needs to be dumped.

From the dump, you should be able to determine the location of the failure. This is often included in a dump summary, formatted at the top of the dump. Using this information you can check INFOSYS (if INFOSYS is available at your site) for known problems in this area. If INFOSYS is not available or a fix cannot be found, you should call the IBM Support Center with the following information to find an answer to the problem:

• Abend code

• Failing module name

  This information can be gathered from the dump, or sometimes from the console log messages issued at abend time. If the module name is not included in the dump summary information, locate the PSW address (second word of the PSW) in the dumped storage. Work backward through the dump (toward zero) until an eyecatcher is encountered. The eyecatcher includes the module name. All VTAM module names start with IST. (The eyecatcher is alphanumeric data usually written at the front of the module. It can be read from the EBCDIC conversion area on the right-hand side of the dump. For example, in the dump, the module name could be x'C9E2E3D6D9C6C2C1'. In the EBCDIC conversion area, you would see 'ISTORFBA'. A decoding sketch appears after this list.)

• Offset of the PSW into this module

  Usually, just before the eyecatcher is an x'47....' instruction, which branches around the eyecatcher and into the module code. This instruction is the start of the module. Subtract the address of this instruction from the PSW address to determine the offset. Remember that this is in hex, not decimal. If the module is in the LPA, an LPA map can be generated. The LPA map indicates the start of the module. Work out the offset using this start address.

• Latest maintenance to hit this module

  This information can be gathered either from the dump or by using SMP/E. Again, find the eyecatcher, which will normally include an assembly date and the latest PTF to be applied to the module. Write down both the assembly date and the PTF number.

• Registers at the time of the abend

  This information can be taken from either the console log or the dump. Register 15 sometimes contains a return code that will assist in problem diagnosis; for example, on an ABEND0A9.

• Recent maintenance applied

  PTFs or APARs that have been recently applied may have contributed to the circumstances that caused the problem.

• Recent changes to the system

  In general, problems occur only after changes have been made to a system. Keep an open mind when looking at recent changes. Even changes that appear totally unconnected may be the cause of the problem.

• Frequency of the abend

  How often the abend has occurred can give some idea of the magnitude of the problem. If the abend occurs frequently, something fairly fundamental is wrong. If it is a one-time occurrence, an obscure set of circumstances, perhaps timing related, may have caused the problem.

The IBM Support Center personnel may request additional information. The console log around the time of the abend and the dump should be kept, in case further diagnosis of the problem requires more information from the dump.
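
The eyecatcher lookup and the offset arithmetic described in the list above can be checked mechanically. The following Python sketch (our illustration) reproduces the ISTORFBA example with the standard cp037 EBCDIC codec and shows the hex subtraction; the two addresses used for the offset are made-up values chosen only to demonstrate the arithmetic:

   # Sketch of the two dump calculations described above: decoding an EBCDIC
   # eyecatcher and computing the PSW offset into a module. Uses the cp037
   # (EBCDIC) codec from the Python standard library.
   eyecatcher_hex = "C9E2E3D6D9C6C2C1"
   print(bytes.fromhex(eyecatcher_hex).decode("cp037"))   # prints ISTORFBA

   # Offset = PSW address minus the address of the x'47....' branch
   # instruction that starts the module. Both addresses are hypothetical.
   psw_address = 0x0005C3F2
   module_start = 0x0005C1D0
   print(f"offset into module: x'{psw_address - module_start:X}'")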

Hang Situations

When the problem is a hang, it can be very difficult to determine its cause. Always look for any indication of a problem on the console log. If the log contains nothing of immediate interest, try to determine the extent of the hang. Try to put the users that are hung into groups with common characteristics:

• Are only users of one particular application hung?

  For example, local DB2 (on DBD1) requests are working, but requests to another remote DB2 (DBD2) are hanging. Some areas that could be investigated include:

  − What is the status of the remote application LU (LUDBD2)?

    A D NET,ID=,SCOPE=ALL command issued on the host owning the application indicates whether the application is active and has any sessions with the local DDF. If the status is not ACTIV, check the meaning of the status in the manual and take appropriate action. The same command issued on the local host displays the CDRSC status. Make sure this is ACTIV (or ACT/S) also.

  − Are sessions already set up?

    The command given above indicates whether there are active sessions. Three sessions are required for system use: one with a LOGMODE of SNASVCMG, and two with the SYSTOSYS LOGMODE. If there are no sessions, check the virtual route between the subareas.

  − Are the sessions working?

    By repeating the displays, check the send and receive counts on the user sessions. Also, a DISPLAY THREAD(*) DETAIL command shows whether there is a conversation on each session. The user may be hung waiting for a conversation to end. When a session becomes available for a new conversation, the user's remote database access is processed.

  − Are local requests on the remote host (DBD2) working?

    If not, this would appear to be a problem on the other host. Look for an abend, loop, or wait problem. Check the remote host console for any relevant messages. Check whether DB2 commands on the remote host (DBD2) work.

    For example, if no commands work, this could mean a problem in DB2. If most commands work, but a cancel thread command does not, this may indicate a problem in either the DDF address space or VTAM. If local requests are working, the problem would appear to be in either the network or the DDF address space.

  − Is the virtual route between the two subareas open and active?

    Use a D NET,ROUTE command to see whether the virtual route is operational. If the virtual route is blocked, there could be a host or an NCP storage shortage problem somewhere in the network.

  − Is any network traffic flowing between the two hosts?

    Check whether any other network users are hung. If there are, this hang is most likely a network problem.

  − Is the CDRM-to-CDRM session between the two hosts active?

    A D NET,ID= command can be used to display the status of the CDRMs, both of which should be ACTIV.

• Are all the hung users in one particular part of the network?

  For example, all hung users may have terminals on one controller. In this case, the problem could be the controller. Display the status of the controller, for it may need to be recycled. Another example: all the hung users are accessing the network through one particular link, and the link has many errors. Slow response times could therefore result because of error recovery and retries. Investigate the link problem.

  When a device or line connected to an NCP becomes inoperative (INOP), NCP generates a Miscellaneous Data Record (MDR). MDRs are sent to the owning SSCP. They are found in LOGREC and can be viewed by requesting an EREP report. The MDRs can also be seen online using the NetView Hardware Monitor (or an equivalent product). These records contain information on why the resource became INOP. They are very useful for problem determination.

• Are all users that are in session working, and only those that are causing a session to be established hung?

  In this circumstance, the problem could be a VTAM problem. Remember that most of the send and receive data processing is done in the user's address space. All session establishment is done in VTAM's address space. Check the following areas:

  − VTAM commands

    Do VTAM commands work? If they do, VTAM is working. If they do not, VTAM may be in a wait (unlikely) or a loop (more likely), or perhaps is unable to get a share of the CPU.

  − CPU usage

    Does any particular address space have very high CPU utilization? If so, monitor this, as the address space could be in a loop.

  − VTAM paging

    Is VTAM doing a large amount of paging? If so, what looks like a loop in VTAM could be VTAM running a very long chain of control blocks.

• Are all users hung?

  If so, this could be a more fundamental problem with the operating system. Check whether any MVS commands are working.

In any hang situation, the results (or lack of results) from a variety of displays can give a clear picture of the scope of the problem. The more information that is available, the easier the problem diagnosis is, and, usually, the faster the resolution will be. It is a discouraging and time-consuming task to try to find a problem in a VTAM dump when all you know is that there is a hang.

If the hang occurs only on a session when one particular request is made, often a VTAM buffer trace can be used to see the last PIUs flowing on the session. These PIUs can often hold the key to the problem.

Loops

When a loop is suspected, a variety of actions can be taken:

• CPU usage

  Usually a loop can be readily found by displaying CPU usage. Of course, it is necessary to have some idea of “normal” CPU usage for the suspected address space, for comparison purposes. In general, any address space using an unusually high amount of CPU should be suspect.

• Paging rates

  The paging rates can be monitored to check for any abnormally high rates. Of course, “normal” usage is again needed for comparison purposes.

• Loop recording

  The 3090 and 308X CPUs have a hardware facility to record up to 490 PSWs. This facility is activated from the hardware console and can be used to trace a loop. The output is dumped when an SVC or stand-alone dump is taken.

  This can be very helpful when debugging a loop problem. For more information on this facility, consult the documentation for your particular CPU.

• VTAM internal trace

  If the VTAM internal trace (VIT) is active, it can give some insight into the problem. If the loop causes entries to be written to the VIT, the trace table will probably wrap before the operators have determined that VTAM is in a loop. However, if the loop does not cause trace entries to be written, the VIT can be invaluable in determining the event that initiated the loop.

• Dumps

  When a loop is encountered, not much can be done to recover the situation. Usually, a dump will have to be taken and the system restarted. Dumps can be taken in any of the following ways:

  − MVS dump command

    If the CPU has multiple processors, MVS commands may still be working. If so, requesting a dump of the looping address space may be sufficient to obtain a dump.

  − Restart dump

    If the CPU has only a single processor, a restart dump can be taken.

  − Stand-alone dump

    If neither of the above dumping methods is appropriate, a stand-alone dump can be taken. Normally, this is the least efficient choice, as the entire system is taken down, and an IPL is required to recover.

With a multiprocessor CPU, a loop can appear as degraded performance. For example, if the CPU has four processors and one is processing the loop, this leaves three CPUs for normal processing. This reduction in processing power will normally cause some performance degradation. Any users associated with the looping address space will normally be hung.

Before calling the IBM Support Center, try to have the following information available:

• Looping modules

  Try to determine the modules involved in the loop using VTAM Diagnosis (LY30-5601).

• Maintenance level of the module(s)

  Gather this information from SMP/E.

• Messages

  Look for any messages on the console that may have triggered the loop. If one message is being issued repeatedly, include it in the description of the loop.

• Trace output

  Have available the output from any traces taken, together with any information you have obtained from the trace.

• Dump

• Console log

  Have these available in case further diagnosis is required.

Storage Shortage

Storage shortages can manifest themselves in any of the following ways:

• VTAM messages

  A VTAM message indicating a storage shortage may be received at the console. This message normally indicates the MVS subpool where the shortage has occurred. Using this subpool number, the area of storage experiencing the shortage can be determined. For example, subpool 231 (SP231) is CSA, while subpool 17 (SP17) is in the address space private area.

  The most common storage shortage problems involve CSA. VTAM uses large amounts of CSA for its control blocks and buffers containing data that is being sent around the network. This problem can occur for a variety of reasons. For example, if one application is flooding another with data, and the second application is unable to receive the data and process it at an adequate rate, the buffers build up in VTAM storage. This situation can be avoided by using session pacing.

  If VTAM is unable to get enough storage to issue a message, the normal storage shortage message is accompanied by an IST999E message.

• RCPRI and RCSEC

  The primary return code (RCPRI) and secondary return code (RCSEC) are both given to DDF when an APPCCMD macro has completed. On occasion, these codes may indicate that there is a storage shortage in VTAM.

  For example, if RCPRI is x'0084' and RCSEC is x'0000', there was a storage shortage while VTAM was receiving data or sending a pacing response. An RCPRI of x'0098' with an RCSEC of x'0000' indicates a temporary storage shortage while sending data. Usually this return code means that the send request has temporarily depleted the buffer pool to such an extent that the pool must be expanded. The expansion had not occurred before the completion of the APPCCMD macro. (A lookup sketch for these two code pairs appears after this list.)

• SNA sense code

  Some SNA sense codes indicate there may be a storage shortage. For example, a user may be accessing data from the remote database and receive sense code '084C0000'. Checking this sense code in the manual tells us that there is a permanent insufficient-resource condition. This resource could be storage. Other sense codes, such as '800A0000', indicate a storage-type problem, but not necessarily a storage shortage.

• Abends

  Several MVS abends (for example, ABEND878 and ABEND80A) indicate storage shortage problems. These are uncommon in VTAM.

• Hangs

  Depending on the storage shortage and the processing that is occurring, the storage shortage could manifest itself as a hang situation. For example, if a virtual route becomes blocked because of storage shortages, all the sessions that had been using that route will hang until the storage shortage is relieved and the virtual route reopens.
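
The two storage-related RCPRI/RCSEC combinations described in the list above fit naturally into a small lookup table, as the following Python sketch shows. Only the two pairs mentioned in the text are included; a complete table would have to be built from the VTAM programming documentation:

   # Sketch: look up the two storage-related APPCCMD return code pairs
   # described in the text. The complete list belongs to the VTAM
   # programming documentation, not to this sketch.
   STORAGE_CODES = {
       (0x0084, 0x0000): "storage shortage while receiving data or sending a pacing response",
       (0x0098, 0x0000): "temporary storage shortage while sending data (buffer pool expanding)",
   }

   def explain(rcpri, rcsec):
       return STORAGE_CODES.get((rcpri, rcsec), "not a known storage-related code")

   print(explain(0x0098, 0x0000))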



When you receive any of these indications of a storage shortage problem, you can use the following steps to find out more about the shortage:

• Determine the area of the storage shortage

  This is important, as the VTAM display command has information about the VTAM CSA usage. This will not help diagnosis if the storage shortage is in VTAM private storage. However, CSA usage should probably be checked anyway. The area can be determined by checking the MVS subpool number (for example, as given in a message). The following subpools are in the CSA area: SP227, SP228, SP231, SP239, and SP241. (A classification sketch appears after this list.)

• Display buffer usage

  If the storage shortage has occurred in one of the CSA subpools, a D NET,BFRUSE command can be used to determine the amount of CSA storage VTAM is using. Within this display, look for VTAM pools that have been expanded many times. Usually the buffer use display gives a good idea of which VTAM pool is causing the storage shortage.

• Monitor buffer usage

  When looking at a buffer shortage, it is often helpful to know whether the onset of the problem was gradual or immediate. If regular buffer usage displays are done, gradual increases in buffer use can be seen. These gradual increases may take days to manifest themselves as storage shortage problems. In fact, if VTAM is taken down regularly, the storage shortage symptom may never be seen.

  Another benefit of regular monitoring of the buffers is that, when a problem does occur, “normal” buffer usage for that host is known, so the buffer values can be compared.

  If the onset of the buffer shortage is very fast, check the system console (or log), looking for some event that has triggered the problem. The event could be virtually anything. For example, an NCP has a problem, and large amounts of data that were heading out onto the network are now caught in VTAM while recovery of the NCP is attempted.

• Take a dump

  To determine the cause of the problem, a dump is usually necessary. When taking the dump, be sure to include CSA in the dumping options. Without CSA the dump is almost useless. The VTAM Diagnosis manual has a discussion on how to find each of the VTAM pools and which control blocks are allocated in each pool.

• Traces

  The SMS option on the VTAM internal trace can be very helpful when looking at storage-related problems. When used in conjunction with other options (PSS, SSCP, and PIU, for example), the SMS option can provide insight into the processing that is causing the storage shortage. The SMS storage trace can also be used to monitor the buffer pool usage if regular displays are inconvenient.
regular displays are inconvenient.


Storage shortage problems fall into the following main categories:

• No session pacing

  In this case, the VTAM I/O buffers (IOBUFs) are being flooded by one application. Finding the IOBUFs in the dump can often lead to the application at fault. Usually the problem will have started after the introduction of (or changes to) an application. Look for any changes in the system that could have caused the problem.

• Control blocks not freed

  There have been problems in the past where control blocks were not freed when they were no longer needed. These control blocks gradually fill large amounts of CSA. This type of problem can normally be found with a search in INFOSYS, if this is available at the site, or RETAIN, if a call is placed with the IBM Support Center. If a known problem is not found, the IBM Support Center should be contacted for further problem diagnosis.

• Buffer pool unable to expand

  If the buffer pool has been scheduled for expansion by VTAM but for some reason is unable to expand, a storage shortage error may be received. This could be a transient condition while the buffer pool is being expanded. If it is not a transient condition, check that the pool is eligible for expansion; that is, that an expansion limit and expansion number have been assigned to the buffer pool. If dynamic expansion has not been allowed, the pool may have used all of its base allocation. In this case, the base allocation should be increased.

Diagnosing storage shortage problems can be quite difficult. However, in general, with a dump and an idea of where the shortage occurs, the problem source can be identified.

When contacting the IBM Support Center, have available as much relevant information as possible. Remember to check for any changes that have been made at the site. Many applications use VTAM's services. Changes made for one application may adversely affect another application.

Incorrect Output

Incorrect output problems can take a variety of forms. In general, the problems can be grouped into two categories. In the first category, the remote access request fails, and an SQL return code and SNA sense code are received. In the second category, the request works, but the data received is inconsistent.

Request fails with return code: When a remote database access fails, the user normally receives a DB2 DDF message. The message and error code should be investigated and, depending on the type of error, the appropriate action taken. For example, a user receives an SNA sense code of x'800A0000'. Checking this sense code in the SNA Formats manual tells us that either the PIU was too long or the buffering available for the PIU was insufficient.

To diagnose the problem, it would be easiest to look at the PIUs to check that the length is not over the RUSIZE maximum that we have defined in the MODETAB entry for this session. Remember that we defined an RUSIZE of 4K. To do this, we should take a VTAM buffer trace of the attempted remote access. This is done by starting the trace with ID=. In a DB2-to-DB2 environment, we have the option of tracing either (or both) of the DB2 systems; that is, the local and/or remote DB2 LU. If the remote DB2 system is traced, we see the PIUs (and the lengths of these PIUs) that the remote DB2 is sending. Because large amounts of data will be coming from the remote host, these large PIUs are more likely to be too long than the smaller requests from the local DB2.

If the local DB2 is traced, we should see the large PIUs arriving from the remote DB2. In this trace we would look for a PIU that has the sense included as part of the RU.

In fact, in this example, the trace of the remote LU did not show the sense code. The PIUs leaving the remote DB2 had a length in the TH of x'1003'. This length includes the 4K (x'1000') data RU and three bytes of RH. So the PIU had a total length of x'101D' including the TH. This should be acceptable.

However, the trace of the local DB2 LU shows each of these large PIUs with the 800A0000 sense. So, the PIUs left the remote DB2 normally, but somewhere along the path between the two hosts, the buffering available for the PIU was insufficient.

Now that we know a bit about the problem, a search in INFOSYS (or a call to the IBM Support Center for a RETAIN search) uncovers an informational APAR that indicates that the MAXBFRU specifications should be checked on any links between the two hosts. (MAXBFRU is the maximum amount of storage available for buffering a PIU destined to flow over that link.) In our case, the MAXBFRU specification on our CTC was allowed to default to one 4K page. Our PIU was 4K plus the TH and RH, so there was insufficient buffering available for it. The PIU was truncated and given the '800A0000' sense code, which was then returned to the user. Increasing the MAXBFRU definition resolved the problem.
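
The length arithmetic in this example is worth making explicit. The following Python sketch reproduces it: a 4K RU plus a 3-byte RH gives the x'1003' length seen in the TH, the difference from the x'101D' total is the 26-byte transmission header, and the resulting PIU no longer fits in a single 4K MAXBFRU page:

   # Sketch of the PIU length arithmetic from the MAXBFRU example above.
   RU_SIZE = 0x1000          # 4K data RU (the RUSIZE defined in the MODETAB entry)
   RH_SIZE = 3               # request/response header
   length_in_th = RU_SIZE + RH_SIZE
   print(f"length recorded in the TH: x'{length_in_th:X}'")    # x'1003'

   total_piu = 0x101D        # total PIU length observed in the trace
   th_size = total_piu - length_in_th
   print(f"transmission header size: {th_size} bytes")          # 26 bytes

   buffer_page = 4096        # MAXBFRU defaulted to one 4K page
   print("PIU fits in one page?", total_piu <= buffer_page)     # False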

Often a search in INFOSYS or a RETAIN search will reveal information on the effect of particular VTAM definitions. This will often lead to a resolution, if not directly, then indirectly, by indicating new areas that can be checked for errors. Of course, each problem must be treated individually, depending on the information received from the sense codes. Often a VTAM buffer trace will help in understanding a problem that results in an SNA sense code. From the trace, you can see exactly what is happening on the session and the sequence of events leading up to the issuance of the sense.

Inconsistent Data: Under normal circumstances, the network will not cause inconsistent data. Remember that if there is a network problem, all the network users are likely to be affected, not just one application. So, inconsistent data is more likely to be a problem within the database.

For example, if a user does a remote update and the commit fails to complete, the data in the database could be updated or not, depending on how far the commit progressed. A DB2 code, for example, x'00D300FE', may be received when a DDF thread is cancelled. In these circumstances, a local request for the “updated” data should be made to check the actual state of the data so that users do not receive inconsistent data.
users do not receive inconsistent data.


Unexpected Messages

Sometimes the first indication of a problem is a message received at the console. In general, the meaning of the message should be reviewed in the VTAM Messages and Codes or IBM DATABASE 2 Version 2 Messages and Codes manual and the appropriate action taken.

For example, message DSNL013I VTAM OPEN ACB FAILED ERROR=90 is received. This message indicates that a problem was encountered while DDF was trying to open its ACB. The open ACB error code should be checked in the VTAM Programming manual. The error codes are listed in the section describing the OPEN macro.

In this case, an error code of 90 means that VTAM could not find a resource in the VTAM definitions that matches the name in the ACB's APPLID field. This is probably because the application major node is not active.

Often an open ACB error will be attributable to the ACB not closing properly when the application was last inactivated. The console log should be checked around the time of the last inactivation for any sign of an error.

6.3.4 Being Prepared

In an environment where software is constantly being upgraded and changed, problems are virtually unavoidable. Spending some time in preparation for those problems can sometimes decrease the impact and resolution time for a problem.

For example, if the problem is well documented the first time it occurs, you may have enough information to determine the cause of the problem and thus avoid having to re-create the problem to gather additional documentation. If you do have to re-create the problem, getting all the required information at that time will prevent additional re-creation attempts to get even more information. Re-creating problems, if such an approach is at all viable, can be very time-consuming and should be kept to a minimum.

With this in mind, we suggest that you check the following items at your site:

• Number of dump data sets

  Make sure an adequate number of dump data sets are available. These data sets should be checked regularly for dumps and cleared out. The dumps should not be deleted without first diagnosing the problem. In some circumstances, a dump is essential to further problem diagnosis. Without a dump, diagnosis stops until the problem occurs again. A dump lost because of full dump data sets can cause an extra outage that could have been avoided.

• Dump data sets large enough

  Ensure that the dump data sets are large enough to hold a whole dump. Partial dumps may not have the required information in them and may thus necessitate a recurrence to get good documentation.

• CSA included in dump options

  VTAM has a large number of control blocks in CSA. Whenever a dump is taken of VTAM, or for a VTAM-related problem, it is essential that CSA be dumped. Often, without CSA, the problem cannot be determined, and a recurrence is needed to get more information. CSA should be specified in the default options for MVS SVC dumps.

• VTAM Internal Trace

  The default for the VIT is no tracing options, with two pages running internally. This will log any errors that occur in the system, but without a time stamp these errors cannot be related to outside symptoms. (Remember, only the VIT running externally to GTF will have a time stamp.)

  In general, if the internal table can be increased to, for example, 50 pages, and some options are included, the table can be of more use for general diagnosis. This is merely a precaution, so the selection of the VIT options to run is arbitrary. The SSCP, PSS, MSG, and PIU options would give some idea of the processing being done by VTAM prior to the dump being taken. Of course, while actively pursuing a problem, specific options will be required, and the trace table should be further increased in size. The VIT should not cause noticeable degradation in performance, although, of course, it does involve some overhead in CPU utilization. Some options cause more trace records to be written than others (LOCK, for example).

• GTF

  Make sure that the GTF procedure is in place and that the data set collecting the trace records is large enough. The data set will wrap if it is not large enough, and important information could be lost.
enough, and important information could be lost.<br />

• Traces<br />

It is important to know how to take traces and format records from the GTF<br />

trace data set. If there is a problem in this area, it should be fixed as soon<br />

as possible. For example, if you have a very severe problem that requires a<br />

trace, it is important that the trace be produced quickly and efficiently. This<br />

is not the time to find out you have a problem in the trace setup—the<br />

resolution of the original problem will only be delayed. Jobs to format the<br />

trace records should be readily available.<br />

• Stand-alone dump<br />

The stand-alone dump facility should be ready and working. If a severe<br />

problem is encountered, often the only way to get a dump is to take a<br />

stand-alone dump.<br />

• Manuals<br />

Manuals containing diagnostic information should be readily available.<br />

Without them, diagnosing the problem may be impossible.<br />

• INFOSYS<br />

INFOSYS and other problem-oriented databases can be invaluable when<br />

trying to determine the cause of a problem. They contain the information on<br />

known problems in the code, as well tips and hints on definitions (and<br />

common problems occurring because of incorrect definitions) and tuning.<br />

• Problem tracking<br />

224 DataHub Implementation and Connectivity<br />

The site should have a system set up for tracking problems. This can be on<br />

paper or on the system. For an ongoing problem, recording each<br />

occurrence, with the available documentation, aids in a clear understanding<br />

of the progress of the problem. In this way, changes in the state of the<br />

problem can be identified, and probable causes for these changes<br />

investigated.


For example, a user has a hang problem. The problem was thought to be<br />

described by an APAR. The PTF to fix the problem described by the APAR<br />

was applied. After this, an intermittent ABEND0C4 occurs. By being able to<br />

refer back to records of the hang problem, perhaps a link between the two<br />

problems can be found. Perhaps there is no link. Without documentation of<br />

the progress of the problems, this possible link would not be investigated.<br />

With a problem tracking system, a colleague can take over a problem when<br />

the original person handling it is unavailable.<br />

• IBM <strong>Support</strong> Center<br />

The IBM <strong>Support</strong> Center (or the equivalent support personnel) should be<br />

contacted for difficult problems that cannot be resolved. The telephone<br />

number for the IBM <strong>Support</strong> Center and the customer number for the site<br />

should be readily available.<br />
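As a sketch only, the preparations described above translate into MVS and VTAM operator commands along the following lines (operand details vary by release, and the GTF procedure name used here is an assumption):

CD SET,SDUMP=(CSA,SQA,RGN,TRT)
F NET,TRACE,TYPE=VTAM,MODE=INT,OPTION=(SSCP,PSS,MSG,PIU),SIZE=50
S GTF.GTF

The first command adds CSA to the default SVC dump options, the second enlarges the VIT to 50 pages with the options suggested above, and the third starts a GTF cataloged procedure named GTF.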

6.3.5 Analyzing DB2 DDF Error Messages<br />

In a DRDA environment, if there is a communication problem, you always get an<br />

SQLCODE -30080 (see Figure 159).<br />


DSNE625I CONNECT TO LOCATION DB2REGNDIST01 PERFORMED, SQLCODE IS 30080<br />

DSNT408I SQLCODE = -30080, ERROR: COMMUNICATION ERROR 0008 (0000)<br />

DSNT418I SQLSTATE = 58019 SQLSTATE RETURN CODE<br />

DSNT415I SQLERRP = DSNLVCNS SQL PROCEDURE DETECTING ERROR<br />


Figure 159. DDF: SQLCODE -30080, Communication Error<br />

Figure 160 shows the corresponding MVS console message.<br />


DSNL501I - CNOS PROCESSING FAILED 339<br />

FOR LU LUDB2B AND MODE IBMRDB<br />

RTNCD=00 FDBK2=0B RCPRI=0008 RCSEC=0000 SENSE=087D0001<br />


Figure 160. DB2: DSNL501I Message<br />

Looking up SENSE=087D0001 in the VTAM Messages and Codes manual enables<br />

you to conclude that there was a problem establishing a session between the<br />

SSCPs.<br />

A distributed logical unit of work can run unsuccessfully for any number of<br />

reasons, including the following:<br />

• Local DDF is not started.<br />

• Remote DDF is not started.<br />

• Remote DB2 system is not started.<br />

• VTAM LU is not active.<br />

• VTAM path errors have occurred.<br />

• VTAM failure has occurred.<br />

• Session or conversation failures have occurred.<br />

• DB2 Communications <strong>Database</strong> entries are incorrect.<br />

• Security failure.<br />
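A few operator commands help eliminate several of these causes quickly. As a hedged sketch, assuming the LU name LUDB2B from Figure 160 (command availability depends on your DB2 and VTAM levels):

-DISPLAY THREAD(*) LOCATION(*)
D NET,ID=LUDB2B,E

The DB2 command shows whether distributed threads exist, which implies that DDF is started at both ends; the VTAM command shows whether the partner LU is known and active.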

In addition to the IBM DB2 Messages and Codes manual, you should refer to the<br />

DB2 - APPC/VTAM Distributed <strong>Database</strong> Usage Guide to understand DB2 DDF<br />

error messages.<br />



6.4 VM Host Problem Determination<br />

This section describes the main tools and facilities available in VM to monitor<br />

and debug the DataHub <strong>Support</strong>/VM platform.<br />

6.4.1 Monitoring DataHub Support/VM Gateways and Conversations

To query DataHub Support/VM gateways and conversations, use the AGW QUERY command, as described in VM/ESA Connectivity Planning, Administration, and Operation.

Query gateways to determine:<br />

• Which gateway is associated with DataHub <strong>Support</strong>/VM<br />

• The status of DataHub <strong>Support</strong>/VM gateways.<br />

Query conversations to determine:

• The state of DataHub conversations
• The remote LU
• The alternate user ID
• The DataHub Support/VM gateway with which the DataHub conversation is associated
• The path ID (required to deactivate conversations).
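For example, the queries can be issued from the AVS console as shown below (a sketch; see VM/ESA Connectivity Planning, Administration, and Operation for the full operand syntax):

agw query gateway
agw query conv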

6.4.2 Handling System Messages

While you are running DataHub <strong>Support</strong>/VM, DataHub <strong>Support</strong>/VM-related<br />

messages may be displayed on your AVS console. Figure 161 on page 227<br />

shows an example of an AVS console file. The messages in the file are<br />

described below.


Ready;
agw activate gateway oedgwsv2 private manager emqvcmg
Ready;
AGWVTB002I Gateway OEDGWSV2 is activated
1 --► DMKSND045E SQLUSR1 not logged on
CNOS for OEDGWSV2 OMMGWSV1 AGW2AGW1 has been completed
CURRENT VALUES: LIMIT = 0020 WINNER = 0010 LOSER = 0010
CNOS COMMAND VALUES: LIMIT = 0020 WINNER = 0010 LOSER = 0010
AGWVTC149I End
SQLUSR1 : Ready; T=0.01/0.02 08:15:33
agw q gateway
Ready;
GATEWAY = OEDGWSV2 PRIVATE ACTIVE CONV COUNT = 00000000
MANAGER = EMQVCMG
AGWVTC149I End
agw deactive gateway oedgwsv2
Ready;
CNOS for OEDGWSV2 OMMGWSV1 AGW2AGW1 has been completed
CURRENT VALUES: LIMIT = 0000 WINNER = 0010 LOSER = 0000
CNOS COMMAND VALUES: LIMIT = 0020 WINNER = 0010 LOSER = 0010
Ready;
agw activate gateway oedgwsv2 private manager emqvcmg
Ready;
AGWVTB002I Gateway OEDGWSV2 is activated
2 --► SQLUSR1 : CONNECT= 00:02:39 VIRTCPU= 000:000:37 TOTCPU= 000:01:28
3 --► SQLUSR1 : LOGOFF AT 12:01:23 EST FRIDAY 10/01/93
4 --► MSG FROM AVSVM: CSIABD228E Subtask of 'AGWDSPMN' failed-System abend 0C1-0000
5 --► EMQV020S Error occurred in CMR while attempting ACTIVATE event.
6 --► EMQV020S Reason Code = 07. DEACTIVATE Gateway CAIBMOML.OEDGWSV2.
CNOS for OEDGWSV2 OMMGWSV1 AGW2AGW1 has been completed
CURRENT VALUES: LIMIT = 0020 WINNER = 0010 LOSER = 0010
CNOS COMMAND VALUES: LIMIT = 0020 WINNER = 0010 LOSER = 0010
AGWVTC149I End

Figure 161. DHS/VM: Sample DataHub Support/VM AVS Console

Statement 1: DMKSND045E SQLUSR1 not logged on

This statement is displayed whenever the DataHub Support/VM CMR attempts to log off a Service Pool machine that is not logged on. SQLUSR1 is the user ID of the Service Pool machine.

A Service Pool machine is not logged on under the following conditions:<br />

• First time use<br />

• After an AVS abend<br />

• After it is logged off from the AVS console.<br />

Statements 2 and 3: SQLUSR1: CONNECT= ... LOGOFF AT ...

These statements are displayed whenever DataHub Support/VM logs off a Service Pool machine.



Statement 4: MSG FROM AVSVM: CSIABD228E Subtask of 'AGWDSPMN' failed - System abend 0C1-0000

This statement is displayed whenever the DataHub <strong>Support</strong>/VM CMR subtask<br />

abends. An alert record is also written to the CMR alert file, and a GCS dump is<br />

taken.<br />

Statements 5 and 6: EMQV020S Error occurred in CMR ... Reason Code = 07 ...

These statements are also displayed whenever the DataHub Support/VM CMR subtask abends. This and all other messages displayed by DataHub Support/VM are described in "DataHub Support/VM CMR Messages and Codes" in the DataHub Support/VM Installation and Operations Guide.

6.4.3 Diagnosing Problems

When trying to resolve a problem, you should determine which DataHub Support/VM function was executing when the problem occurred; then refer to the discussion of problems involving either:

• The CMR
or
• The Task Handler and the Tools Feature.

To diagnose a problem, you have to know which other products the components of DataHub Support/VM interact with.

The DataHub Support/VM components interact with other products as follows:

The DataHub <strong>Support</strong>/VM CMR interacts with:<br />

• AVS using parameter lists associated with AVS events<br />

• GCS system services (including GCS Trace Facility)<br />

• CP using CP commands to control the Service Pool machine environment.<br />

The DataHub <strong>Support</strong>/VM Task Handler interacts with:<br />

• The system to dynamically load the tool module and DataHub <strong>Support</strong>/VM<br />

Common Services modules<br />

• CMS system services to:<br />

− Set up the tool environment<br />

− Specify an abend exit routine that will gain control should the Service<br />

Pool machine abend<br />

− Set up the trace environment, including trace facility management.<br />

• DataHub/2 to:<br />


− Issue CPI-C calls for program-to-program communication with DataHub/2.<br />

The Display Status function of the DataHub <strong>Support</strong>/VM tools feature interacts at<br />

the host with:<br />

• CMS system services to load the status of the shared segment.<br />

The Display Status function of the DataHub <strong>Support</strong>/VM tools feature interacts at<br />

the RDB with:


• The SQL/DS Resource Adapter to connect to like SQL/DS databases to get<br />

the status of those databases.<br />

The Copy Data function of the DataHub <strong>Support</strong>/VM tools feature interacts with:<br />

• The SQL/DS Resource Adapter to connect to like SQL/DS and unlike<br />

RDBMSs, and to fetch and insert data.<br />

The Utilities of the DataHub Support/VM tools feature interact with:

• The SQL/DS DBS utility to load data, unload data, or reorganize an index.

6.4.4 Problems Involving the CMR

If AVS abends, each DataHub <strong>Support</strong>/VM CMR is invoked with an abend event<br />

so that it can log off its set of Service Pool machines and put itself in shutdown<br />

mode.<br />

If one of the Service Pool machines abends, control is given to an abend exit,<br />

which causes the connection between the Service Pool machine and DataHub/2<br />

to be severed. The DataHub <strong>Support</strong>/VM CMR gains control with a deallocate<br />

event after the severance occurs and marks the Service Pool machine available<br />

for use. When VTAM notifies AVS of an Attention Loss between an LU 6.2<br />

session and the associated DataHub <strong>Support</strong>/VM LU, AVS invokes the associated<br />

DataHub <strong>Support</strong>/VM CMR. The CMR will sever all connections for that session<br />

and log off the Service Pool machines that are associated with the session. The<br />

Service Pool machines are reset and marked available for use.<br />

AVS Console: Messages issued by AVS and the DataHub Support/VM CMR are displayed on the AVS console. See VM/ESA Connectivity Planning, Administration, and Operation for the messages displayed by AVS. See the chapter entitled "Messages and Codes" in the DataHub Support/VM Installation and Operations Guide for the messages displayed by the DataHub Support/VM CMR. Figure 161 on page 227 shows a sample AVS console.

CMR Alert File: The DataHub Support/VM CMR alert file contains the alert records of all DataHub Support/VM CMRs on the GCS system. This file will be created on the AVS machine's A-disk when the DataHub Support/VM CMR detects the first alert situation. Alert records are written under the following conditions:

• Unexpected parameters from AVS<br />

• Missing or invalid control blocks<br />

• I/O errors<br />

• GETMAIN/FREEMAIN errors<br />

• CMR message repository errors<br />

• Unexpected errors from CP or GCS commands.<br />

Each alert record identifies:<br />

• The gateway where the problem occurred<br />

• The date and the time<br />

• The CMR module that issued the alert (for example,<br />

EMQVC00)<br />



• A brief description of the problem.<br />

Figure 162 shows the layout of alert records that are<br />

written to the DataHub <strong>Support</strong>/VM CMR alert file.<br />

* CMR Alert File Layout<br />

*<br />

*The set of records written for each Alert consist of 3 or more records:<br />

*First record specifies: timestamp and product/feature<br />

*Second record specifies:release/lvl, ABEND code, module, alert code<br />

*Third record specifies: alert string constant<br />

*Fourth and subsequent records specify: symptom string (in blocks of 80)<br />

93-123-08.15.55 PIDS/5688103 DataHub <strong>Support</strong>/VM Feature<br />

LVLS/330 AV/S0806 U0000 RIDS/EMQVCGM 3<br />

Error loading repository. Symptom data is: gateway name, R15.<br />

Gateway name = CAIBMOML.OEDGWSV2 R15 = 0004.<br />

93-123-08.16.10 PIDS/5688103 DataHub <strong>Support</strong>/VM Feature<br />

LVLS/330 AV/S0000 U0000 RIDS/EMQVC30 1<br />

WARNING: Service Pool machine not available. Symptom data is: gateway name.<br />

Gateway name = CAIBMOML.OEDGWSV2<br />

93-123-08.17.21 PIDS/5688103 DataHub <strong>Support</strong>/VM Feature<br />

LVLS/330 AV/S00C1 U0000 RIDS/EMQVC00 4 See also: GCS SYSTEM DUMP<br />

Unknown abend. Symptom data is gateway name, event name.<br />

Gateway name = CAIBMOML.OEDGWSV2 Event name = ACTIVE.<br />

93-123-08.18.09 PIDS/5688103 DataHub <strong>Support</strong>/VM Feature<br />

LVLS/330 AV/S0000 U0000 RIDS/EMQVC00 3<br />

SEVERE ERROR: CMR communication area is invalid. There is no symptom data.<br />

Figure 162. Sample DataHub <strong>Support</strong>/VM CMR Alert File<br />

This file has four alert records. The second record<br />

indicates that there were no Service Pool machines to<br />

process a DataHub/2 request. The third alert record<br />

indicates that a GCS system dump has been produced.<br />

GCS Trace File: The DataHub Support/VM CMR uses the GTRACE macro to write the trace records in the trace table. The IBM Support Center may want the GCS trace records for problem resolution.

GCS System Dump: A GCS system dump is taken whenever the DataHub Support/VM CMR cannot recover from an abend. The dump is sent to the reader of the AVS machine or to a specified user ID in the GCS collection. The IBM Support Center may want the GCS system dump for problem resolution.

6.4.5 Problems Involving the Task Handler and the Tools Feature<br />


Errors may occur at any time between DataHub/2 and DataHub <strong>Support</strong>/VM.<br />

After the error has occurred, if the conversation still exists between them, an<br />

attempt is made to end the conversation in a controlled manner. How the error<br />

is handled depends on where it is detected and the state of the detecting<br />

program.<br />

Four types of errors are related to CPI-C verbs:<br />

• Communication link failures


• Communication configuration errors<br />

• DataHub errors:<br />

− Internal error related to CPI-C verbs<br />

− Unexpected flow from DataHub/2<br />

• CPI-Communications system errors.<br />

DataHub <strong>Support</strong>/VM Trace File<br />

DataHub <strong>Support</strong>/VM tracing is controlled by DataHub/2.<br />

When tracing is enabled, DataHub <strong>Support</strong>/VM programs<br />

write trace information to the DataHub <strong>Support</strong>/VM trace<br />

file (see Figure 163). This file is created on the reader of<br />

the SYSADMIN user ID on the same VM system as the<br />

Service Pool machine.<br />

* Trace File Layout<br />

*<br />

4 1992-03-26-15.24.43.153174 EMQVT00 CS EX<br />

6Entered Task Handler.<br />

4 1992-03-26-15.24.43.154000 EMQVT00 CS IP<br />

6Entered Task Handler input parameter is: szResourceID=STARVIEW.<br />

4 1992-03-26-15.24.43.155003 EMQVT00 CS IN<br />

5Globals initialization complete. EMQCGLOBAL structure at address X′005CD344′: a<br />

5cEyeCatcher=EMQCGLOBAL, pHostSpecificGlobal=X′005CD4D8′, pEmqdaIn=X′00000000′,<br />

.<br />

.<br />

.<br />

.<br />

4 1992-03-26-15.24.43.155453 EMQVT10 CS IN<br />

5Entered function EMQVT10 with input parameters: pEmqcGlobal=X′005CD344′, pHostS<br />

5pecificGlobal=X′005CD4D8′, szResourceID=STARVIEW, ppToolTable=X′005CD12C′, psEx<br />

6ecuteProf=X′005CD14C′ .<br />

.<br />

.<br />

.<br />

.<br />

Figure 163. Example of DataHub <strong>Support</strong>/VM Trace File<br />

Note: For more information see "Diagnosing Problems" in the DataHub Support/VM Installation and Operations Guide.

DataHub <strong>Support</strong>/VM Alert File<br />

The DataHub <strong>Support</strong>/VM alert file (Figure 164 on<br />

page 232) contains the alert records for the DataHub<br />

<strong>Support</strong>/VM Task Handler, tools feature, and Common<br />

Services.<br />

If abnormal or unusual error situations are encountered,<br />

alert records normally will be written to the reader of the<br />

SYSADMIN user ID on the same VM system as the Service<br />

Pool machine.<br />



* Alert File Layout<br />

*<br />

*The set of records written for each Alert consist of 3 or more records:<br />

*First record specifies: timestamp and product/feature<br />

*Second record specifies:release/lvl, ABEND code, module, alert code<br />

*Third record specifies: alert string constant<br />

*Fourth and subsequent records specify: symptom string (in blocks of 80)<br />

1993-08-12-10.54.34.123456 PIDS/5688103 DataHub <strong>Support</strong>/VM Feature<br />

LVLS/330 AB/S999 RIDS/EMQVT40 5<br />

EMQVT36 Returned unexpected return code. Symptom data is: bad return code.<br />

Figure 164. Sample DataHub <strong>Support</strong>/VM Alert File<br />

Note: For more information see "Diagnosing Problems" in the DataHub Support/VM Installation and Operations Guide.

6.5 AS/400 Host Problem Determination

6.5.1 The AS/400 Tools

This section explains the tools and facilities that are available in the AS/400 system to help with problem determination.

Because DataHub is a cooperative processing product, in a problem determination situation the problem could be in the network, the workstation, or the AS/400 system. A debugging approach should be hierarchical, beginning with the simple tools and ending with the more complex tools, such as a trace.

When DataHub does not work for any reason, or you receive specific error messages, you should begin looking at the workstation and, after that, at the AS/400. The first place to look is the EMQCELOG.010 log, which is located on the workstation. Read the log and try to understand the meaning of the error messages. If the log does not give you any clues about the problem, it is time to debug the problem directly on the AS/400 system.

The tools that you can use for problem determination are:<br />

• Display configuration, network, lines, controllers, devices<br />

• Error messages<br />

• Job log<br />

• Spool files<br />

• QSYSOPR message queue<br />

• System log<br />

• Traces<br />

− Communication<br />

− DataHub<br />



6.5.2 Network Problems<br />

DataHub Support/400 and the DataHub/2 workstation use one of two communication protocols, depending on the DataHub function that is being used. For example, the Utilities use a private protocol (defined through CPI Communications side information in the OS/2 Communications Manager) over LU 6.2. Depending on the function you use, a different approach should be followed. Table 1 on page 17 shows the functions and the respective protocols used; use the information in the table to drive your debugging approach.

This section shows the AS/400 commands in mnemonic form. If you are not<br />

familiar with the AS/400 platform, it is possible to navigate through the AS/400<br />

using the OS/400 menus.<br />

The AS/400 network verification steps are:<br />

1. Verify the line description (Work with Configuration Status):

WRKCFGSTS *LIN

Figure 165 shows an AS/400 WRKCFGSTS command output screen. If the TRLINE is active, as it is in Figure 165, everything should work properly. However, if it is not active, you must vary it off and then vary it back on (a command sketch follows the figure).

Work with Configuration Status SJAS400A<br />

03/20/93 08:51:01<br />

Position to . . . . . Starting characters<br />

Type options, press Enter.<br />

1=Vary on 2=Vary off 5=Work with job 8=Work with description<br />

9=Display mode status ...<br />

Opt Description Status -------------Job--------------<br />

TRLINE ACTIVE<br />

TRCTLJOE ACTIVE<br />

DHCLIENT ACTIVE<br />

QPCSUPP ACTIVE/TARGET DHCLIENT QSECOFR 004030<br />

QPCSUPP ACTIVE/TARGET QXFSERV QSECOFR 004035<br />

QPCSUPP ACTIVE/TARGET QXFSERV QSECOFR 004035<br />

QPCSUPP ACTIVE/TARGET DHCLIENT QSECOFR 004038<br />

QPCSUPP ACTIVE/TARGET DHCLIENT QSECOFR 004039<br />

QPCSUPP ACTIVE/TARGET DHCLIENT QSECOFR 004052<br />

Bottom<br />

Parameters or command<br />

===><br />

F3=Exit F4=Prompt F12=Cancel F23=More options F24=More keys<br />

Figure 165. AS/400: Verifying Communications<br />
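If the line is not active, the vary can be done with options 1 and 2 on the WRKCFGSTS screen or, as a sketch, with the VRYCFG command (assuming the line name TRLINE from the figure):

VRYCFG CFGOBJ(TRLINE) CFGTYPE(*LIN) STATUS(*OFF)
VRYCFG CFGOBJ(TRLINE) CFGTYPE(*LIN) STATUS(*ON)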

2. Verify the AS/400 log (Display Message Queue QSYSOPR):<br />

DSPMSG QSYSOPR<br />

Figure 166 on page 234 shows a display of the QSYSOPR queue.<br />


5738SS1 V2R2M0 920925 SJAS400A 03/20/93 09:09:13<br />

Display Device . . . . . : DSP01<br />

User . . . . . . . . . . : QSECOFR<br />

Work with Messages<br />

System: SJAS400A<br />

Messages in: QSYSOPR<br />

Type options below, then press Enter.<br />

4=Remove 5=Display details and reply<br />

Opt Message<br />

QWTSCVMXA distribution queue.<br />

SNADS router 003916/QSNADS/QROUTER started.<br />

Subsystem QSNADS started.<br />

Subsystem QSNADS in library QSYS starting.<br />

All sessions ended for device LUSQLDB2.<br />

Communications device LUDB23 was allocated to subsystem QCMN.<br />

Program start request received on communications device SJA2019I00<br />

was rejected with reason codes 713, 0.<br />

Communications device SJA2019I00 was allocated to subsystem QCMN.<br />

Controller SJA2019I contacted on line TRLINE.<br />

An adapter has inserted or left the token-ring on line TRLINE.<br />

An adapter has inserted or left the token-ring on line TRLINE.<br />

F1=Help F3=Exit F5=Refresh F16=Remove messages not needing a reply<br />

F17=Top F18=Bottom F24=More keys<br />

Figure 166. AS/400: DSPMSG QSYSOPR Command Output<br />

The QSYSOPR message queue contains all error messages that the AS/400<br />

sends, including communication errors. If there is any message related to<br />

your problem, move the cursor to any point over the message, and press<br />

PF1 or the Help key to get the detailed information.<br />

3. Check the spooled files of the DataHub user

Another way to verify AS/400 problems is to capture the error messages in a spooled file. In a debugging situation it is better to have the highest level of message logging. On the AS/400, the DataHub user profile should have an assistance level of *ADVANCED. The assistance level is the ASTLVL parameter on the user profile. Also verify that the job description associated with the user profile has the message logging parameters Level 4, Severity 00, and Text *SECLVL.

Figure 167 on page 235 shows the AS/400 Change User Profile screen and<br />

the parameter ASTLVL.


Change User Profile (CHGUSRPRF)<br />

Type choices, press Enter.<br />

User profile . . . . . . . . . . USRPRF > DHJOSE<br />

User password . . . . . . . . . PASSWORD *SAME<br />

Set password to expired . . . . PWDEXP *NO<br />

Status . . . . . . . . . . . . . STATUS *ENABLED<br />

User class . . . . . . . . . . . USRCLS *PGMR<br />

Assistance level . . . . . . . . ASTLVL *ADVANCED<br />

Current library . . . . . . . . CURLIB *CRTDFT<br />

Initial program to call . . . . INLPGM *NONE<br />

Library . . . . . . . . . . .<br />

Initial menu . . . . . . . . . . INLMNU MAIN<br />

Library . . . . . . . . . . . *LIBL<br />

Limit capabilities . . . . . . . LMTCPB *NO<br />

Text 'description' . . . . . . . TEXT 'DataHub User'

Figure 167. AS/400: Parameter ASTLVL in User Profile<br />

Figure 168 shows the AS/400 job description, with the message logging.<br />

Display Job Description<br />

Job description: QDFTJOBD Library: QGPL<br />

Message logging:<br />

Level . . . . . . . . . . . . . . . . . . . . : 4<br />

Severity . . . . . . . . . . . . . . . . . . . : 0<br />

Text . . . . . . . . . . . . . . . . . . . . . : *SECLVL<br />

Log CL program commands . . . . . . . . . . . . : *NO<br />

Accounting code . . . . . . . . . . . . . . . . : *USRPRF<br />

Print text . . . . . . . . . . . . . . . . . . . : *SYSVAL<br />

Routing data . . . . . . . . . . . . . . . . . . : QCMDI<br />

Request data . . . . . . . . . . . . . . . . . . : *NONE<br />

Device recovery action . . . . . . . . . . . . . : *SYSVAL<br />

Figure 168. AS/400: Message Logging in Job Description<br />
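As a sketch, the settings shown in Figure 167 and Figure 168 can be put in place with the following commands (user profile DHJOSE and job description QGPL/QDFTJOBD are the examples from the figures):

CHGUSRPRF USRPRF(DHJOSE) ASTLVL(*ADVANCED)
CHGJOBD JOBD(QGPL/QDFTJOBD) LOG(4 00 *SECLVL)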

To view the spool files for the DataHub user, type the command WRKSPLF<br />

userid, where userid is the user profile of the DataHub user.<br />

Figure 169 on page 236 shows detailed information about an error message.<br />

The same information can be found at the spooled file, QPJOBLOG.<br />



Additional Message Information Page 1<br />

5738SS1 V2R2M0 920925 SJAS400A 03/19/93 19:05:44<br />

Message ID . . . . . . : CPF1269    Severity . . . . . . . : 00

Date sent . . . . . . : 03/17/93 Time sent . . . . . . : 09:00:07<br />

Message type . . . . . : Information<br />

From job . . . . . . . . . . . : LUSQLVMA<br />

User . . . . . . . . . . . . : STSQLB<br />

Number . . . . . . . . . . . : 003655<br />

Message . . . . : Program start request received on communications device<br />

LUSQLVMA was rejected with reason codes 605, 1506.<br />

Cause . . . . . : The program start request was rejected in job<br />

003655/STSQLB/LUSQLVMA. The device belongs to remote location LUSQLVMA. If<br />

the device is an advanced program-to-program communications (APPC) dev ice,<br />

the program start request was received on mode IBMRDB with unit-of-work<br />

identifier USIBMSC.LUSQLVMA-A732A86F56AD-0001.<br />

Recovery . . . : For more information on reason codes and their meanings,<br />

see the Communications: Intersystem Communications Function Programmer′ s<br />

Guide, SC41-9590. See the job log for more information about the problem.<br />

Figure 169. AS/400: Operator Message Queue Display Screen<br />

4. Check the system history log<br />

Use the DSPLOG (Display LOG) command to check all the messages on the<br />

AS/400. We recommend that you press F4 (Prompt) and specify a starting<br />

date and time; otherwise you will receive the entire system log.<br />

5. Use traces<br />

Depending on the information that you find in the previous logs, you may<br />

want to use the AS/400 trace facilities.<br />

There are three communications traces (ICF, CPI, JOB); you should use the<br />

trace that is appropriate for the problem.<br />
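As a hedged sketch, the commands below start, end, and print a communications trace on line TRLINE and trace the current job (exact parameters vary by release):

STRCMNTRC CFGOBJ(TRLINE) CFGTYPE(*LIN)
ENDCMNTRC CFGOBJ(TRLINE) CFGTYPE(*LIN)
PRTCMNTRC CFGOBJ(TRLINE) CFGTYPE(*LIN)
TRCJOB SET(*ON)
TRCJOB SET(*OFF) OUTPUT(*PRINT)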

6.5.3 Return Codes and Error Messages

Important error codes are recorded in the system log or in the QSYSOPR message queue. We strongly recommend that you use the AS/400 help facility to get detailed information about the messages. If you do not understand the meaning of any AS/400 message, the help facility tells you where you can find the respective information.

6.5.4 SQL Return Codes

To verify the meaning of an SQLCODE in the AS/400, you can use the DSPMSGD (Display Message Description) command.

Suppose you are looking for the meaning of SQLCODE -0551. You can use the<br />

command:<br />

DSPMSGD RANGE(SQL0551) MSGF(QSQLMSG)<br />

Figure 170 on page 237 shows the output screen for the DSPMSGD command.


range(sql0551) msgf(qsqlmsg)

Display Device . . . . . : DSP01<br />

User . . . . . . . . . . : QSECOFR<br />

Select Message Details to Display<br />

System: SJAS400A<br />

Message ID . . . . . . . . . : SQL0551<br />

Message file . . . . . . . . : QSQLMSG<br />

Library . . . . . . . . . : QSYS<br />

Message text . . . . . . . . : Not authorized to object &1 in &2 type *&3.<br />

Select one of the following:<br />

1. Display message text<br />

2. Display field data<br />

5. Display message attributes<br />

30. All of the above<br />

Selection<br />

F3=Exit F12=Cancel<br />

(C) COPYRIGHT IBM CORP. 1980, 1992.<br />

Figure 170. AS/400 Detailed SQLCODE<br />

6.5.5 Request Codes

The request codes that you find in the job log, system log, or spool file log can be identified in the AS/400 Communications APPC Programmer's Guide (SC41-8189).

6.5.6 Security Errors

The AS/400 identifies everything in the system as an object. The proper authority must be granted to allow DataHub users to access the AS/400 objects or functions. Otherwise the operation will be denied, and an error message will be issued.

Figure 171 shows the message that a DataHub user receives if the authority was<br />

not granted to the DataHub package before the user tries to access AS/400<br />

services or functions.<br />

Additional Message Information<br />

5738SS1 V2R2M0 920925 SJAS400A 03/19/93 18:57:43<br />

Message ID . . . . . . : CPF2218 Severity . . . . . . . : 30<br />

Date sent . . . . . . : 03/16/93 Time sent . . . . . . : 09:41:27<br />

Message type . . . . . : Information<br />

From job . . . . . . . . . . . : QSYSARB<br />

User . . . . . . . . . . . . : QSYS<br />

Number . . . . . . . . . . . : 003252<br />

Message . . . . : User DHJOSE not authorized to *SQLPKG QEMQUSER/EMQ1010<br />

Cause . . . . . : Job 003524/DHJOSE/AS400LU02 does not have authority to<br />

object EMQ1010 in library QEMQUSER with object type *SQLPKG. If the name of<br />

the job is not displayed, the job completed before the message was sent.<br />

Recovery . . . : Have the security officer give user DHJOSE authority to<br />

object EMQ1010.<br />

Technical description . . . . . . : The authorization event detected was 00020101.

Figure 171. AS/400 Security Error Message<br />



To grant authority to any user, you can use the following AS/400 commands:<br />

• EDTOBJAUT<br />

• GRTOBJAUT<br />

• SQL command grant (if the object is a package, table, or view).<br />
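For example, a minimal GRTOBJAUT sketch for the security error shown in Figure 171 (user DHJOSE, package EMQ1010 in library QEMQUSER) might be:

GRTOBJAUT OBJ(QEMQUSER/EMQ1010) OBJTYPE(*SQLPKG) USER(DHJOSE) AUT(*USE)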

6.5.7 Debugging Tools Conversation Data Flow Errors at AS/400

Some DataHub functions, like RDBMS Work and the Utilities (for example, unload, reload, and reorganize), use tools conversation data flows, as described in the OS/2 Communications Manager CPI Communications side information definition. When you use DataHub/2 to access any SAA RDB server, the error messages are also logged in the workstation EMQCELOG.010 log file. The log contains the detailed error message information to use to debug the problem that you are having.

Note: Not only are private protocol messages logged in EMQCELOG.010; all messages that you receive when using DataHub/2 are logged. This includes DRDA, SQL, syntax, and other error messages.

6.5.8 EMQ9088E Message

The EMQ9088E message is issued when such an error occurs (you can see it in the EMQCELOG.010 log). The message contains detailed error information. The most probable errors are:

• EMQ9088E No_Host_user_id<br />

Verify that the profile you have defined at the OS/2 Communications<br />

Manager CPI Communications side information matches the AS/400 user<br />

profile, including the password.<br />

• EMQ9088E Emq_failure_no_retry

Verify the user ID, the password, the communications setup, and the OS/2 Database Manager.

• EMQ9088E SQLERROR<br />

Verify the meaning of the SQLCODE. You can verify the SQLCODE using the<br />

DSPMSGD command at an AS/400 terminal session:<br />

DSPMSGD RANGE(SQLXXXX) MSGF(QSQLMSG)<br />

where XXXX is the SQLCODE. It is also possible to verify the SQLCODEs in any SAA RDB messages manual, because they are the same; for example, -0551 means no authority, and -0204 means object not found.

• The profile that is being used at OS/2 Communications Manager CPI<br />

Communications side information must exist at the AS/400, and a valid<br />

password must be specified.<br />

• EMQ9088E Connect Error

The problem could be the definitions at the OS/2 Communications Manager. The CPI Communications side information definition must match the AS/400 partner LU definition. Verify the AS/400 QSYSOPR messages and the spool files for the user ID. Verify the physical connection between the DataHub workstation and the AS/400, for example, the lines, controllers, and devices. At an AS/400 session, verify the network status using the WRKCFGSTS (Work with Configuration Status) command.


• EMQ9251W CM_TP_NOT_AVAILABLE_NO_RETRY

The OS/2 Communications Manager CPI Communications side information parameter TPNAME must be QDMU/QSTSC.

6.5.9 Traces

It is possible to use AS/400 trace facilities for problem determination. The<br />

recommended approach is to search in this sequence:<br />

1. DataHub EMQCELOG.010 file<br />

2. AS/400 QSYSOPR message queue<br />

3. AS/400 system log<br />

4. AS/400 spool file.<br />

After you have looked in all the log files and the spool files, and you have not<br />

found out what the problem is, you can start a trace based on the symptoms you<br />

have found in the log.<br />

The AS/400 traces are:<br />

• Communications traces<br />

It is possible to specify the object (a line, a controller) you want to trace.<br />

• ICF traces<br />

Used to trace all intersystem communications functions.<br />

• CPI-C Traces<br />

Traces all CPI Communications that occur in a job in which the command is<br />

entered.<br />

• Job traces (TRCJOB or STRSRVJOB)<br />

It is possible to trace a specific job at the AS/400. To identify which job you<br />

want to trace, use the WRKACTJOB (Work with Active Jobs) command to see<br />

the job attributes required to start a trace.<br />

• AS/400 GO CMDTRC command

This command displays a menu with many trace and debug options that enable you to identify the correct trace facility to use for your problem.

6.6 OS/2 Host Problem Determination

On OS/2 managed hosts, you can use the same OS/2, OS/2 <strong>Database</strong> Manager,<br />

OS/2 Communications Manager, and DDCS/2 tools for problem determination as<br />

you use on the DataHub/2 workstation. However, people skilled in the use of<br />

those tools are usually not available at the OS/2 host location.<br />

Therefore, simple procedures could be set up to help the user start and stop the<br />

appropriate trace utility and create the output to be sent to the appropriate<br />

person.<br />

The Distributed Console Access Facility/2 (DCAF/2) program could be used to<br />

access the remote OS/2 host from a central point. Then a specialist could do the<br />

appropriate problem determination remotely.<br />


When the tools conversation protocol is being used between the DataHub/2 and<br />

the DataHub <strong>Support</strong>/2 workstations, the DataHub/2 user could request a trace to<br />

be performed on the DataHub <strong>Support</strong>/2 host. The trace will be stored in a<br />

directory specified by the TRACEPATH parameter in the DataHub <strong>Support</strong>/2<br />

configuration file as shown in Figure 172.<br />

Alert files are another source of information to use for problem determination on<br />

the DataHub <strong>Support</strong>/2 platform. Those files are stored in a directory that is<br />

specified by the ALERTPATH entry in the DataHub <strong>Support</strong>/2 configuration file.<br />

More information on these alerts can be found in the DataHub <strong>Support</strong>/2<br />

Installation and Operations Guide.<br />

* DataHub <strong>Support</strong>/2 Configuration File<br />

TOOLTABLE=D:\EMQ2\EMQ2TOOL.CFG<br />

ALERTPATH=D:\EMQ2\ALERT<br />

TRACEPATH=D:\EMQ2\TRACE<br />

RDBNAMEMAP=D:\EMQ2\DHS2.MAP<br />

Figure 172. Problem Determination Entries in the DataHub Support/2 Configuration File (Sample EMQ2SYS.CFG File)


Chapter 7. Security Considerations<br />

This chapter presents an overview of the different security systems used on the<br />

DataHub platforms and explains how the different platforms interact with each<br />

other concerning security. It also deals with the issue of password modification<br />

as enforced in most organizations and the management of those changes.<br />

Finally, it proposes some recommendations to help you control DataHub security<br />

issues.<br />

7.1 Security on the DataHub/2 Workstation<br />

7.1.1 User Profile Management<br />

This section describes the security system used on the DataHub/2 workstation.<br />

It also discusses the security implications of configurations involving a<br />

DataHub/2 database server, an OS/2 LAN server, and a DDCS/2 gateway system.<br />

As explained in 1.2, “DataHub Data Flows: Overview” on page 6, several<br />

protocols are used to connect a DataHub/2 workstation to the different managed<br />

hosts. The protocol used to send the request to the managed host determines<br />

which security information is sent with the request.<br />

If the request uses either the DRDA or RDS protocol, the User Profile<br />

Management security system is used to define which userid/password<br />

combination is sent to the managed host. If the tools conversation protocol is<br />

used, the DataHub/2 user profile security information is used.<br />

User Profile Management (UPM) controls access to certain resources available on an OS/2 workstation. Both the LAN server and OS/2 Database Manager (DB2/2 or Extended Services for OS/2) use UPM security.

Different versions of UPM are delivered with different versions of those products.<br />

LAN Server version 3 and DB2/2 deliver version 3 of UPM. LAN Server version 2<br />

and Extended Services for OS/2 deliver UPM version 2. This implies that you<br />

need to plan your software installation carefully. Extended Services for OS/2<br />

should not be installed after LAN Server version 3, and LAN Server version 2<br />

should not be installed after DB2/2. But LAN Server version 3 and Extended<br />

Services for OS/2 could work on the same machine.<br />

When you install Extended Services for OS/2, LAN server/requester, or DB2/2,<br />

UPM is automatically installed or updated on the workstation. A new group of<br />

applications is added to the OS/2 desktop. This group provides the LOGON,<br />

LOGOFF, and UPM applications.<br />

UPM provides you with programs and menus to:<br />

• Manage users<br />

• Manage groups<br />

• Manage your personal information (password, logon user profile).<br />

Naturally, you will need to have the appropriate authority in order to manage<br />

users and groups on the appropriate domain.<br />

UPM manages the local domain (OS/2 <strong>Database</strong> Manager) and the LAN domain.<br />



Before we explain what those domains are, we should point out that, if a<br />

workstation is used as a LAN server or as a LAN domain controller and as an<br />

OS/2 <strong>Database</strong> Manager database server, only one security system is used for<br />

both domains. For example, when an administrator adds a user to the LAN<br />

domain, that user is also given access to OS/2 <strong>Database</strong> Manager databases.<br />

That user will have access to authorities granted to the group to which he or she<br />

belongs (that is, the UPM group) and to the authorities granted to public.<br />

Local Domain<br />

The local domain is the local workstation domain. Through the local domain,<br />

you control access and privileges to local OS/2 <strong>Database</strong> Manager resources<br />

like OS/2 <strong>Database</strong> Manager directories. Even if your workstation does not have<br />

a local database (that is, it uses the Distributed feature of OS/2 <strong>Database</strong><br />

Manager), you still have to log on to the local workstation to access the local<br />

database directories (system and workstation directories) and a remote<br />

database server or DDCS/2 gateway machine.<br />

One way to log on to the local OS/2 Database Manager domain is to type this command on the OS/2 command line:

LOGON userid /p=password /L

You could also execute the LOGON program from the UPM folder. Or, a<br />

program could do the logon for you through the UPM APIs.<br />

Recommendation

Every DataHub/2 user should put the LOGON command (that is, LOGON userid /p=password /L and, if applicable, LOGON userid /p=password /D=domain) in the OS/2 STARTUP.CMD file to automate the logon process.

Although not recommended, the userid and password information could differ<br />

between your local OS/2 workstation and other systems (OS/2 and non-OS/2<br />

systems). UPM provides a solution through the user logon profile facility. This<br />

facility, which is available from the local domain environment, permits a user to<br />

associate a userid and password with a specific node (workstation). The node<br />

name indicated in the user logon profile must have the same value as the<br />

workstation alias name in the OS/2 <strong>Database</strong> Manager workstation directory.<br />

Note: Please note that UPM uses the term node and that OS/2 <strong>Database</strong><br />

Manager uses the terms workstation and node for the same object.<br />

You do not have to use the user logon profile if you access all RDBMSs with the<br />

same userid and password.<br />

To access the user logon profile facility, these are the steps you need to perform<br />

if your workstation is NOT connected to a LAN domain controller:<br />

1. Select the User Profile Management folder from your OS/2 desktop.<br />

2. Select the User Profile Management application from the UPM folder.<br />

The User Profile window will appear.<br />

3. Select Actions from the Action menu.<br />

4. Select the Add/change user logon profile... from the Actions pull-down<br />

menu. If you are logged on to the LAN domain, this action will not be


available on the pull-down menu. You will need to select Use Domain from<br />

the same pull-down menu (see the procedure below).<br />

You need to perform the following steps if your workstation is connected to a<br />

LAN domain controller:<br />

1. Select the User Profile Management folder from your OS/2 desktop.<br />

2. Select the User Profile Management application from the UPM folder.<br />

3. Select Actions from the Action menu.<br />

4. Select Use domain... from the Actions pull-down menu.<br />

The Use Domain window will appear showing the active domain.<br />

5. Click on the Local radio button.<br />

6. Click on the OK push button.<br />

The User Profile window will appear.<br />

7. Select Actions from the Action menu.<br />

8. Select Add/change user logon profile... from the Actions pull-down menu.<br />

If you do not have a user logon profile, you could log on using this command on the OS/2 command line:

LOGON userid /p=password /N=nodename

If you did not use the user logon profile, did not log on using the node logon<br />

command, and are not logged on (local logon), the logon process to the target<br />

host will not succeed. You will then get an error message and be presented<br />

with a menu to log on to that particular host.<br />

Figure 173 on page 244 shows the logic related to security information between<br />

the different components, whether the database you want to access is local or<br />

remote to your workstation.<br />


Figure 173. User Profile Management: Security Flow Diagram<br />

LAN Domain

The second domain controlled by User Profile Management (UPM) is the LAN domain. The LAN domain, as used in scenario 3, requires the user to be defined to the domain. To access any resources, such as the DataHub/2 programs and help facilities, the user needs to log on to the domain. The OS/2 command that could be used is:

LOGON userid /p=password /D=domain

You could also execute the LOGON program from the UPM folder, or a program<br />

could do the logon for you through the UPM APIs.


Security Aspects for the Three Scenarios<br />

As explained in Chapter 4, “DataHub/2 Workstation” on page 105, many<br />

DataHub/2 configurations are possible. Each configuration will have different<br />

impacts on security. This section discusses two different configurations: (1)<br />

DataHub/2 stand-alone configuration (scenarios 1 and 2) and (2) the LAN server,<br />

DataHub/2 server, and the DDCS/2 gateway installed on the same machine<br />

(scenario 3). It also describes the impact of having the LAN server, DataHub/2<br />

server, and DDCS/2 gateway installed on different machines.<br />

Scenario 1: This configuration implies that three security systems are used:<br />

one on the DataHub/2 workstation and one on each of the OS/2 hosts. The<br />

DataHub/2 database is managed locally and is usually used by only one user.<br />

No group needs to be defined to grant authorities to the DataHub/2 database.<br />

Two workstations will need to be defined on the DataHub/2 workstation in the<br />

OS/2 <strong>Database</strong> Manager workstation (node) directory.<br />

If different combinations of userid and password are used on each OS/2 system,<br />

we recommend that you create a separate user logon profile entry and a<br />

separate OS/2 <strong>Database</strong> Manager workstation entry for each OS/2 host.<br />

Scenario 2: From a security point of view, scenario 2 is more complex than<br />

scenario 1 because of the additional hosts. However, on the DataHub/2<br />

workstation, this configuration implies having to add user logon profile entries if<br />

the userid and password definitions are different on those new hosts. In our<br />

case, we were using different userids on the New York (MVS) and Toronto (VM)<br />

systems. Using the UPM user logon profile and different OS/2 <strong>Database</strong><br />

Manager workstation names for each host helped us automate the logon process<br />

to the different hosts (that is, we eliminated the prompts for userid and<br />

password).<br />

Recommendation<br />

If you are using different userids or passwords on the target hosts, we highly<br />

recommend that you use a different workstation name for each host even if<br />

you are using a DDCS/2 gateway machine to get to those hosts.<br />

Take a look at our OS/2 <strong>Database</strong> Manager client workstation directory<br />

definitions in Figure 61 on page 87. We used different workstation names<br />

even though we were using the same workstation (DDCS/2 gateway machine)<br />

to connect to the target RDBMS. We used this configuration for scenario 3.<br />

Scenario 3: In scenario 3, the LAN server, the DataHub/2 server, and the<br />

DDCS/2 gateway are installed on the same machine. The only installation factor<br />

to consider is the version of LAN server and OS/2 <strong>Database</strong> Manager being<br />

installed as discussed in the previous section.<br />

This configuration is the simplest of all DataHub/2 configurations because only<br />

one security system is installed on the machine. Both LAN server and OS/2<br />

<strong>Database</strong> Manager use the same security definitions. You need to define the<br />

users and the groups only once. This scenario is recommended for a multiple<br />

user configuration that uses the LAN server to share the DataHub/2 code and<br />

data and where all users share the DataHub/2 database.<br />



From a security point of view, the difference between scenarios 2 and 3 is the fact that users will need to log on to the LAN domain in order to use the DataHub/2 code.

Other Configurations: Many other configurations are possible in a LAN<br />

environment. Before you opt to use any of the following configurations, consider<br />

their security aspects:<br />

• One server: a LAN server
− LAN server machine to share the DataHub/2 product
− Local DataHub/2 database and local DDCS/2 gateway

− Local DataHub/2 database and local DDCS/2 gateway<br />

• One server: a DDCS/2 gateway<br />

− DDCS/2 gateway machine<br />

− Local DataHub/2 database and code<br />

• Two servers: a LAN server and a DDCS/2 gateway<br />

− LAN server machine to share the DataHub/2 product<br />

− Local DataHub/2 database<br />

− DDCS/2 gateway machine<br />

• Three servers: a LAN server, a DataHub/2 server, and a DDCS/2 gateway<br />

− LAN server machine to share the DataHub/2 product<br />

− DataHub/2 database server<br />

− DDCS/2 gateway machine.<br />

Each of these configurations implies the use of multiple security systems (for<br />

example, OS/2 UPM) that need some coordination. With these systems,<br />

password modification in one system is not automatically reflected in the other.<br />

A (costly) solution is to install the LAN server code on each server. Because<br />

password changes are broadcast among all LAN servers controlled by the same<br />

domain controller, the password changes of the LAN side will be reflected on<br />

each OS/2 <strong>Database</strong> Manager system.<br />

7.1.2 DataHub/2 User Profiles

DRDA or RDS requests use the OS/2 UPM facility to provide the security

information to the target hosts. In the case of the DataHub tools conversation<br />

protocol, the DataHub/2 user profile security information is used in the requests<br />

sent to the DataHub <strong>Support</strong> components.<br />

Each user can have multiple profiles, but only one profile is active at one time.<br />

To send a request to the target host, there must be an entry associated with that<br />

host in the active DataHub/2 user profile. If this entry is not defined, you will find<br />

a message like this one in the DataHub/2 log:<br />

011 COPY D:\EMQDIR\EXE\EMQDTTOP.EXE 1993-03-10-11.04.03.97<br />

EMQ9088E The Common Service, EmqConnect, has returned with return code,<br />

EMQ_NO_HOST_USER_ID.<br />

The tricky question is, Which host is the target host? When the request is a<br />

reorganize, the answer is very easy because the process involves only one host.


But when the request is a Copy Data that involves two hosts, it is more difficult<br />

to figure out which DataHub/2 user profile entry will be sent along with the<br />

request. Figure 174 on page 247 will help you understand which host will get<br />

the DataHub/2 request and, by the same logic, which DataHub/2 user profile<br />

entry will be sent with the request. The complexity is caused by the fact that the<br />

OS/2 <strong>Database</strong> Manager can only be a requester in a DRDA network. When a<br />

Copy Data function is requested that involves one OS/2 <strong>Database</strong> Manager<br />

database and one DRDA server RDBMS, the request will always be sent to the<br />

DataHub <strong>Support</strong>/2 component, which will process the request.<br />

Because the security information sent with the Copy Data request is used for<br />

both conversation security validation on the target host and the Copy Data<br />

function itself, in order to succeed, the Copy Data function requires that the<br />

security information be the same on both the source and the target RDBMS.<br />

Figure 174. DH/2: User Profile Entry Selection Diagram<br />



7.1.3 DataHub/2 Database Privileges

If you have only one DataHub/2 workstation, as in scenarios 1 and 2, you will not

need to be concerned about privileges. For scenario 3 (multiple database<br />

administrators and probably help-desk personnel), you should use OS/2 UPM<br />

groups to assign privileges to the DataHub/2 package and database tables.<br />

Before granting privileges to the DataHub/2 database tables, you will need to<br />

grant execute on the DataHub/2 package to all DataHub/2 users. This should be<br />

done using the different groups defined in the OS/2 UPM. Please refer to the<br />

DataHub/2 Installation and Administration Guide for more information on the<br />

package name and the different forms of grant commands, which differ from<br />

platform to platform. The package name for the first release of DataHub/2 is<br />

QEMQUSER.EMQP1010.<br />

To grant privileges to the different DataHub/2 database tables, DataHub/2<br />

provides you with this command:<br />

EMQGRANT value groupid-or-userid

The value could be USER or ADMIN. A USER value gives select-only authority, which is appropriate for help-desk personnel. The ADMIN value gives all privileges to the group or to a specific user.

7.2 MVS Security

For DataHub/2 to ship requests that DB2 is to process, DB2 must be connected to the network. DB2 is one of the many LUs that have been defined to VTAM, as mentioned in section 3.4.2, "DRDA Definitions" on page 47 and shown in Figure 24 on page 48. Before your request gets to DB2, DB2 has to connect to VTAM. The first check is done before DB2 connects to the network and a session is established with VTAM; it prevents unauthorized connection to the network.

If the DB2-to-VTAM connection is allowed, the next step is to verify whether the partner LU that is sending the request (DataHub/2) is valid. This check is made by RACF and VTAM.

Finally, when the conversation is established, DB2 will check whether it can accept and process the request coming from DataHub/2.

7.2.1 DB2 Attachment to the Network Check<br />

Before installing the DB2 distributed database capability, you have to choose a<br />

LOCATION, a LUNAME, and a password, which should be provided to the DB2<br />

installation CLIST. This will generate the required job to include these definitions<br />

in the DB2 BSDS. When defining DB2 to VTAM, the LUNAME is the APPL macro<br />

label, and the password is checked against the APPL PRTCT parameter. These<br />

definitions allow VTAM to control which DB2 can be connected to the network.<br />

7.2.2 VTAM and RACF Partner LU Validation<br />


The validation of the LU that is sending the request is done by VTAM and RACF.<br />

If you specify VERIFY=REQUIRED on the APPL statement when defining your<br />

DB2, you are activating the requesting partner LU verification. RACF and VTAM<br />

will check if the request comes from a valid LU. If you specify VERIFY=NONE,


you are indicating that you do not want connection verification, and any LU can connect to your DB2.

7.2.3 DB2 Request Checking

Once the conversation is established, the requester application (DataHub/2)

sends a "remote attach request," which is submitted to a security check mechanism that will conform to what you have specified in the APPL SECACPT VTAM parameter and in the communication database tables. You should code

SECACPT=ALREADYV on the VTAM APPL statement to ensure that all<br />

conversations received contain an authorization ID; they may contain a<br />

password. If the password is supplied by the requester, it will be passed to RACF<br />

at the local site to be checked.<br />
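Pulling together the VTAM operands mentioned in 7.2.1 through 7.2.3, a DB2 APPL definition might look roughly like the following sketch (the LU name and password are hypothetical, and other operands are omitted):

LUDB2A   APPL  APPC=YES,PRTCT=DB2APASS,VERIFY=REQUIRED,SECACPT=ALREADYV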

Refer to Figure 175 on page 250 for an overview of the security mechanism. You will find more details in the DB2 and VTAM literature. The steps taken at the server site are:

Password check If no password arrived with the remote request, DB2<br />

checks the acceptance option specified in the<br />

USERSECURITY column in the CDB<br />

SYSIBM.SYSLUNAMES table. See steps 1 and 2.<br />

SYSLUNAMES check If the USERNAMES column of SYSLUNAMES contains an<br />

I or B, the authorization ID that comes from the<br />

requester is subject to translation. Translation is done<br />

according to what is specified in the<br />

SYSIBM.SYSUSERNAMES table. If it is not an I or B, it is<br />

RACF checked anyway. Refer to step 3.<br />

ID check for connections DB2 calls RACF for verification. If the ID or password<br />

cannot be verified, DB2 rejects the request. See step 5.<br />

Connection processing The remote request is treated as a local connection<br />

request. See steps 6 and 7.<br />

SYSUSERNAMES check DB2 searches the SYSIBM.SYSUSERNAMES table for a<br />

row that tells it how to translate the ID. If there is no<br />

such row, DB2 rejects the request. See steps 10 and 11.<br />

Inbound requests can be managed by DB2 or by RACF. If you do not specify I or<br />

B in the USERNAMES column of the SYSIBM.SYSLUNAMES table for a particular<br />

LU name, you are selecting RACF check. In this case you have to provide the<br />

proper definitions and password to RACF.<br />
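The CDB entries themselves are ordinary rows that can be maintained with SQL. The following is an illustration only: the exact CDB column layout varies with the DB2 release, and the LU name and authorization IDs are hypothetical.

   UPDATE SYSIBM.SYSLUNAMES
      SET USERNAMES = 'I'
      WHERE LUNAME = 'LUDH2WS';

   INSERT INTO SYSIBM.SYSUSERNAMES
      (TYPE, AUTHID, LINKNAME, NEWAUTHID)
      VALUES ('I', 'DHUSER', 'LUDH2WS', 'DB2USER');

The first statement requests inbound translation for LU LUDH2WS; the second supplies the translation row that maps the incoming authorization ID DHUSER to DB2USER.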



Figure 175. DB2 Security Mechanism



7.3 VM Security

This section presents an overview of VM security in the DRDA and DataHub environments.

7.3.1 Controlling Access to the SQL/DS Application Server System

The user ID and, if the conversation-level security is SECURITY=PGM, the password configured at DataHub/2 are sent to the VM host (AS) and verified at the CP where the service pool machine resides. The DataHub/2 host user ID and password are valid if:

• The DataHub Support host user ID exists in a VM system where DataHub Support/VM is running
• The password is a valid password for the DataHub/2 host user ID (if SECURITY=PGM).

Note: SECACPT=ALREADYV should be coded in the AVS VTAM APPL statement if you want to use either SECURITY=SAME or SECURITY=PGM for your conversation. If you specify SECACPT=CONV, only SECURITY=PGM can be used.

Figure 176 shows the four inbound processing steps.

Figure 176. VM Security Inbound Processing


Inbound Processing

1. LU Verification

   The first step occurs when two RDBMSs establish an LU 6.2 session. DRDA defines the LU 6.2 flows that are used to establish the session. Session-level security is provided by specifying VERIFY=REQUIRED in the VTAM APPL statement defining AVS and supplying a password in the PRTCT field of the same APPL statement. If VERIFY=NONE, session-level security is skipped.

2. AVS Validation Translation

   If the incoming conversation uses SECURITY=SAME, the incoming LU and user ID must be defined in the target AVS gateway. AVS also offers user ID translation services for this type of conversation-level security.

3. RACF or CP Password Check

   If the incoming conversation uses SECURITY=PGM, the external security system is invoked. In the case of RACF, this requires the APPCPWVL VM event to be controlled (which is the default). Otherwise, the password is verified by CP. If SECURITY=SAME, the password is not checked; only the DataHub/2 host user ID has to be defined in VM.

   Note: Inbound ALLOCATE requests to DataHub Support/VM must indicate conversation security (SECURITY=PGM).

4. Connection Authorization

   In SQL/DS, the RDBMS is a complete virtual machine, and it controls the connection as an SQL privilege. A DataHub/2 host user ID must have at least CONNECT authority to the SQL/DS databases that are supported by DataHub Support/VM.

A DataHub/2 host user ID must also have various levels of authorization to execute the following functions of the DataHub Support/VM tools feature:

Display Status
   At host: None.
   At RDB: Requires EXECUTE privilege on the Display Status package QEMQUSER.EMQVQS00 to allow DBSPACE NAME support. This is optional authority, needed only to get the DBSPACE NAME.

Copy Data
   Requires authority to select from the source table, authority to insert into the target table, and EXECUTE privilege on package QEMQUSER.EMQVCS00 at both the source and the target RDB.

Utilities
   Load Data: Must be the owner of the table, have SELECT and INSERT privileges on the table, or have DBA authority. The input file may be on tape or disk, but if it is read from disk, it must be a shared file system (SFS) file. The DataHub/2 host user ID also requires read authority to the SFS file in the specified directory.

   Unload Data: Must have at least SELECT privilege on the table. The output file can be written to tape or disk, but if it is recorded to disk, it must be an SFS file. The DataHub/2 host user ID also requires write authority to the SFS directory.

   Reorganize Index: Must be the owner of the index, or have DBA authority. In the case of primary or unique keys, must be the owner of the table, or have DBA authority. Must have EXECUTE privilege on package QEMQUSER.EMQVCS00.
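As a sketch of the minimum grants (the user ID DHUSER is hypothetical, and on some SQL/DS levels the package privilege is granted with RUN rather than EXECUTE):

   GRANT CONNECT TO DHUSER IDENTIFIED BY DHPASS
   GRANT EXECUTE ON QEMQUSER.EMQVQS00 TO DHUSER
   GRANT EXECUTE ON QEMQUSER.EMQVCS00 TO DHUSER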

7.3.2 Controlling Access to the SQL/DS Application Requester System

The mechanism for controlling access to the SQL/DS AR system is the same as the single-site security mechanism, for example:

• End-user signon to VM
• CP or RACF protection of the SQL/DS production disk that contains the AR code. Users need to link and access this disk before they can use the AR function.

Outbound Processing

To access a remote database, the user must access a COMDIR that contains an entry for that database. The COMDIR contains information such as the source and target LU names and the type of conversation-level security to be used (for example, PGM or SAME). Each user can access up to two COMDIRs (system-level and user-level) at a given time; the user-level COMDIR is searched first.

The administrator is likely to define the COMDIR to be accessed by everyone, but users can define and access their own COMDIR for connection.

For SECURITY=PGM, the COMDIR allows you to specify the user ID and password that will be used for the connection. The preferred approach is to define the user ID and password in the user's CP directory as APPCPASS records.
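As an illustration only (the nickname, gateway and target LU names, user ID, and password are hypothetical, and the set of tags used varies by installation), a user-level COMDIR entry and the matching CP directory APPCPASS record might look like:

   :nick.TORDB1   :luname.GATEACB1 VDHTORD1
                  :modename.IBMRDB
                  :security.PGM
                  :userid.DHUSER

   APPCPASS GATEACB1 VDHTORD1 DHUSER DHPASS

With the APPCPASS record in place, the password does not have to be stored in the COMDIR itself.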

Preventing Users from Making Outbound Requests

All remote access requires the use of AVS gateways. Therefore you can control outbound requests by making appropriate CP directory definitions for the user's or AR's AVS virtual machine. For example, you can have entries in AVS that accept connection requests only from specific users, or from all users (IUCV-level security: ALLOW, ANY, or specific user IDs). Similarly, you can have entries in the user's directory that allow the user to connect through any of the AR's AVS gateways, or only through specific gateways. However, if users have access to any target LUs that have sessions between the two sides, it is up to the target site to reject the connection.


7.4 AS/400 Security

This section presents an overview of AS/400 security. Detailed information on AS/400 security can be found in the AS/400 Basic Security Guide (SC41-0047) and the AS/400 Security Reference (SC41-8083).

The AS/400 does not have an external security subsystem; all security is handled by the OS/400 operating system.

The OS/400 QSECURITY system value establishes the security level of the AS/400 system. We recommend that you use at least level 30 in order to have user profile and password validation and to control access to all objects in the system. Therefore, to access any object in an AS/400 system you must have the appropriate authorization.

7.4.1 User Profile and Password Validation

In the DRDA environment, the user profile and password can be validated at remote locations, or you can implement a security mechanism that ensures that all remote accesses are controlled by the local system. It is possible to define the links and connections as secure locations, or as security already verified. In such cases you assume that the user who is entering your database has had the user profile and password validated and can access your database with a default profile.

Another method of validation is to specify secure location = NO. In this case the user profile and password must match a predefined user profile at the local system.
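On the AS/400, the secure-location attribute is carried on the APPC device description for the link. A minimal sketch, assuming a hypothetical device named DHLINK:

   CHGDEVAPPC DEVD(DHLINK) SECURELOC(*NO)

SECURELOC(*YES) would instead treat the partner system as a secure location and accept its already-verified user IDs.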

This is valid for all DBMSs: when trying to connect to a remote database, you must use a user profile and a password that are valid at the remote system. Alternatively, you can define a scheme with a default user profile on each platform, defining the communications links as secure or already verified, or you can use a translation mechanism if one is available, as is possible under DB2.

The AS/400 system does not provide any outbound translation to resolve potential conflicts in the network. These conflicts must be resolved at the application server.

The OS/400 Distributed Relational Database Guide (SC41-0025) has more detailed information about AS/400 security in a DRDA environment.

7.4.2 QEMQUSER Authorization

During the installation of DataHub Support/400, the QEMQUSER user profile is created, and this user profile is the owner of the QEMQUSER collection. We recommend that you not change the owner of the QEMQUSER collection, because QEMQUSER needs at least *OBJOPR and *ADD privileges on that collection.

To connect a specific DataHub/2 workstation to an AS/400 host, you must bind the DataHub/2 workstation to the AS/400. The package name is EMQP1010, and it is located in the AS/400 QEMQUSER collection.

To allow users other than QEMQUSER to access the DataHub Support/400 functions, it is necessary to do one of the following:


• Start an AS/400 interactive SQL session by issuing the STRSQL command and then execute the following statement:

   GRANT EXECUTE ON PACKAGE QEMQUSER.EMQP1010 TO USERID

• Use the AS/400 GRTOBJAUT command (see the sketch after this list)
• Use the OS/2 Database Manager command line interface and the GRANT command.
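For the second bullet, the GRTOBJAUT route might look like the following sketch (the user profile DHUSER and the *USE authority value are assumptions to adapt to your installation):

   GRTOBJAUT OBJ(QEMQUSER/EMQP1010) OBJTYPE(*SQLPKG) USER(DHUSER) AUT(*USE)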

7.4.3 Security Administration

The OS/400 CRTUSRPRF (Create User Profile), WRKUSRPRF (Work with User Profile), or CHGUSRPRF (Change User Profile) commands are used to create an OS/400 user profile or to change the password. It is also possible to change a password by going through the OS/400 menus.
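A minimal sketch of creating such a profile (the profile name, password, and descriptive text are hypothetical; all other parameters are left at their defaults):

   CRTUSRPRF USRPRF(DHUSER) PASSWORD(DHPASS) TEXT('DataHub/2 user')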

The AS/400 security officer has *SECADM authority and is usually responsible for managing user profiles, passwords, authorization lists, and all security-related tasks under OS/400.

Authorization lists can be created to manage a group of users that need access to the same objects. We recommend that you create authorization lists to facilitate security administration.

7.5 OS/2 Managed Host Security

This section discusses how the inbound requests sent by the DataHub/2 workstation are validated. The good news about the OS/2 managed hosts is that all RDS and tools conversation requests are validated using the OS/2 UPM facility.

7.5.1 RDS Security

Section 7.1.1, "User Profile Management" on page 241 discusses how the OS/2 UPM facility is used to validate the RDS request. The OS/2 managed host is a server to a DataHub/2 workstation. The DataHub/2 user must be defined in the OS/2 managed host (that is, defined in the OS/2 managed host's UPM) in order to access the local RDBMS. When applicable, a group should be assigned to a DataHub/2 user to help manage the OS/2 Database Manager privileges.

7.5.2 Tools Conversation Security

The first level of security when a request is sent from a DataHub/2 workstation to an OS/2 workstation is conversation security; specify Same in the Security type field. Program security is enforced by DataHub/2, and the DataHub/2 user profile security information is sent along with the tools conversation request.

On the OS/2 managed host side, the transaction program definition (see "Transaction Program Name" on page 196) indicates that the security information is required, and the conversation security definition (see "Conversation Security" on page 198) indicates that the OS/2 UPM facility is used to validate the incoming request.



DataHub Support/2 also uses one or more user IDs and passwords sent (in encrypted format) from the DataHub/2 workstation to issue logons for the process executing the DataHub Support/2 tool functions. If a user-supplied encryption routine is used at the DataHub/2 workstation, then a matching user-supplied routine (one that uses the same encryption algorithm) must be installed at each DataHub Support/2 host managed by the DataHub/2 workstation; in other words, the EMQCRYPT.DLL must match at both DataHub/2 and DataHub Support/2. Failure to have matching encryption routines will cause the DataHub Support/2 tool functions to fail various authorization checks.

The user ID and password for the DataHub Support/2 host must also be supported at all hosts that you intend to use in copy data operations involving this DataHub Support/2 host.

7.6 Managing Password Changes

Before discussing the problems associated with password changes, let us summarize the different locations where passwords are defined:

• At the DataHub/2 workstation:
  − UPM
  − UPM user profile (one per host; optional if all your user IDs and passwords are the same on every host)
  − DataHub/2 user profile (one entry per managed host)
• At the LAN server, DataHub/2 server, or DDCS/2 gateway (optional):
  − UPM
• At each managed host.

As you can imagine, changing passwords every month is a major issue if you have hundreds of hosts to manage, as would be the case in a distributed OS/2 environment. Naturally, one person cannot manage hundreds of host systems; dividing the responsibility among a group of people minimizes the problems. Another solution is to use one user ID and password for all the database administrators and another user ID and password for the help-desk personnel. Everyone should be advised when a password is changed.

To summarize, if you change your password on a specific host, you will need to reflect that password change in:

• OS/2 UPM
• The DataHub/2 user profile.

Note: When you change your UPM password on the DataHub/2 workstation, you will be prompted by DataHub/2 for your old and new UPM passwords because the OS/2 UPM password is used to encrypt the DataHub/2 user profile. DataHub/2 prompts you the first time you start DataHub/2 after an OS/2 UPM password change.


7.7 Recommendations

Our principal recommendations about security are:

• If you are using the LAN server configuration or a DataHub/2 database server configuration:
  − Assign UPM users to groups
  − Grant DataHub/2 database authority to a group.
  One group will have administration authority (select and update) on the DataHub/2 database; another will have user authority (select only) on the DataHub/2 database.
• Use the same user ID and password for every host system you access.
• If different combinations of user ID and password are used on the managed hosts, we recommend that you create one OS/2 UPM logon profile entry for each host.
• Use a different workstation name for each host, even if you are using a DDCS/2 gateway machine to reach those hosts. Use the OS/2 Database Manager workstation alias as the OS/2 UPM node name.
• If your organization permits it, never change your password.
• Clean up the OS/2 UPM sessions by logging off all functions. This is done automatically when you shut down your workstation.
• Back up the NET.ACC file from the C:\MUGLIB\ACCOUNTS directory. The copy command should be put in the STARTUP.CMD procedure so that the NET.ACC file is copied before the LAN server or OS/2 Database Manager is started (see the sketch after this list).
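A minimal STARTUP.CMD fragment for that backup might look like the following sketch (the name of the backup copy is an assumption):

   REM Save the accounts database before LAN Server or Database Manager starts
   COPY C:\MUGLIB\ACCOUNTS\NET.ACC C:\MUGLIB\ACCOUNTS\NET.BAK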





Appendix A. OS/2 Communications Manager Network Definition File

This is the listing of the network definition file (NDF) used for the DataHub/2 workstation in scenario 2. It contains all symbolic destination name definitions and the links to attach to any of the ITSC managed hosts.

DEFINE_LOCAL_CP FQ_CP_NAME(USIBMSC.SJA2039I )
                DESCRIPTION(DataHub Control Point Workstation)
                CP_ALIAS(ASJA2039)
                NAU_ADDRESS(INDEPENDENT_LU)
                NODE_TYPE(EN)
                NODE_ID(X'05DA2039')
                HOST_FP_SUPPORT(YES);

DEFINE_LOGICAL_LINK LINK_NAME(OMTLDLR1)
                DESCRIPTION(OS/2, Montreal, Car dealer)
                FQ_ADJACENT_CP_NAME(USIBMSC.SJA2018I )
                ADJACENT_NODE_TYPE(LEARN)
                DLC_NAME(IBMTRNET)
                ADAPTER_NUMBER(0)
                DESTINATION_ADDRESS(X'400052047122')
                CP_CP_SESSION_SUPPORT(NO)
                ACTIVATE_AT_STARTUP(NO)
                LIMITED_RESOURCE(USE_ADAPTER_DEFINITION)
                LINK_STATION_ROLE(USE_ADAPTER_DEFINITION)
                SOLICIT_SSCP_SESSION(NO)
                EFFECTIVE_CAPACITY(USE_ADAPTER_DEFINITION)
                COST_PER_CONNECT_TIME(USE_ADAPTER_DEFINITION)
                COST_PER_BYTE(USE_ADAPTER_DEFINITION)
                SECURITY(USE_ADAPTER_DEFINITION)
                PROPAGATION_DELAY(USE_ADAPTER_DEFINITION)
                USER_DEFINED_1(USE_ADAPTER_DEFINITION)
                USER_DEFINED_2(USE_ADAPTER_DEFINITION)
                USER_DEFINED_3(USE_ADAPTER_DEFINITION);

DEFINE_LOGICAL_LINK LINK_NAME(ORIODLR1)
                DESCRIPTION(OS/2, Rio de Janiero, Car dealer)
                FQ_ADJACENT_CP_NAME(USIBMSC.SJA2019I )
                ADJACENT_NODE_TYPE(LEARN)
                DLC_NAME(IBMTRNET)
                ADAPTER_NUMBER(0)
                DESTINATION_ADDRESS(X'400052047116')
                CP_CP_SESSION_SUPPORT(NO)
                ACTIVATE_AT_STARTUP(NO)
                LIMITED_RESOURCE(USE_ADAPTER_DEFINITION)
                LINK_STATION_ROLE(USE_ADAPTER_DEFINITION)
                SOLICIT_SSCP_SESSION(NO)
                EFFECTIVE_CAPACITY(USE_ADAPTER_DEFINITION)
                COST_PER_CONNECT_TIME(USE_ADAPTER_DEFINITION)
                COST_PER_BYTE(USE_ADAPTER_DEFINITION)
                SECURITY(USE_ADAPTER_DEFINITION)
                PROPAGATION_DELAY(USE_ADAPTER_DEFINITION)
                USER_DEFINED_1(USE_ADAPTER_DEFINITION)
                USER_DEFINED_2(USE_ADAPTER_DEFINITION)
                USER_DEFINED_3(USE_ADAPTER_DEFINITION);

DEFINE_LOGICAL_LINK LINK_NAME(OPARDLR1)
                DESCRIPTION(OS/2, Paris, Car dealer)
                FQ_ADJACENT_CP_NAME(USIBMSC.SJA2050I )
                ADJACENT_NODE_TYPE(LEARN)
                DLC_NAME(IBMTRNET)
                ADAPTER_NUMBER(0)
                DESTINATION_ADDRESS(X'400052047157')
                CP_CP_SESSION_SUPPORT(NO)
                ACTIVATE_AT_STARTUP(NO)
                LIMITED_RESOURCE(USE_ADAPTER_DEFINITION)
                LINK_STATION_ROLE(USE_ADAPTER_DEFINITION)
                SOLICIT_SSCP_SESSION(NO)
                EFFECTIVE_CAPACITY(USE_ADAPTER_DEFINITION)
                COST_PER_CONNECT_TIME(USE_ADAPTER_DEFINITION)
                COST_PER_BYTE(USE_ADAPTER_DEFINITION)
                SECURITY(USE_ADAPTER_DEFINITION)
                PROPAGATION_DELAY(USE_ADAPTER_DEFINITION)
                USER_DEFINED_1(USE_ADAPTER_DEFINITION)
                USER_DEFINED_2(USE_ADAPTER_DEFINITION)
                USER_DEFINED_3(USE_ADAPTER_DEFINITION);

DEFINE_LOGICAL_LINK LINK_NAME(ASJDLR1 )
                DESCRIPTION(AS/400, San Jose, Car dealer)
                FQ_ADJACENT_CP_NAME(USIBMSC.SJAS400A )
                ADJACENT_NODE_TYPE(LEARN)
                DLC_NAME(IBMTRNET)
                ADAPTER_NUMBER(0)
                DESTINATION_ADDRESS(X'400052047158')
                CP_CP_SESSION_SUPPORT(NO)
                ACTIVATE_AT_STARTUP(NO)
                LIMITED_RESOURCE(USE_ADAPTER_DEFINITION)
                LINK_STATION_ROLE(USE_ADAPTER_DEFINITION)
                SOLICIT_SSCP_SESSION(NO)
                EFFECTIVE_CAPACITY(USE_ADAPTER_DEFINITION)
                COST_PER_CONNECT_TIME(USE_ADAPTER_DEFINITION)
                COST_PER_BYTE(USE_ADAPTER_DEFINITION)
                SECURITY(USE_ADAPTER_DEFINITION)
                PROPAGATION_DELAY(USE_ADAPTER_DEFINITION)
                USER_DEFINED_1(USE_ADAPTER_DEFINITION)
                USER_DEFINED_2(USE_ADAPTER_DEFINITION)
                USER_DEFINED_3(USE_ADAPTER_DEFINITION);

DEFINE_LOGICAL_LINK LINK_NAME(MNYCENT )
                DESCRIPTION(MVS, New York, Central distribution)
                FQ_ADJACENT_CP_NAME(USIBMSC.SCG20 )
                ADJACENT_NODE_TYPE(LEN)
                DLC_NAME(IBMTRNET)
                ADAPTER_NUMBER(0)
                DESTINATION_ADDRESS(X'400008210200')
                CP_CP_SESSION_SUPPORT(NO)
                ACTIVATE_AT_STARTUP(NO)
                LIMITED_RESOURCE(USE_ADAPTER_DEFINITION)
                LINK_STATION_ROLE(USE_ADAPTER_DEFINITION)
                SOLICIT_SSCP_SESSION(YES)
                PU_NAME(SJA2039 )
                NODE_ID(X'05DA2039')
                EFFECTIVE_CAPACITY(USE_ADAPTER_DEFINITION)
                COST_PER_CONNECT_TIME(USE_ADAPTER_DEFINITION)
                COST_PER_BYTE(USE_ADAPTER_DEFINITION)
                SECURITY(USE_ADAPTER_DEFINITION)
                PROPAGATION_DELAY(USE_ADAPTER_DEFINITION)
                USER_DEFINED_1(USE_ADAPTER_DEFINITION)
                USER_DEFINED_2(USE_ADAPTER_DEFINITION)
                USER_DEFINED_3(USE_ADAPTER_DEFINITION);

DEFINE_PARTNER_LU FQ_PARTNER_LU_NAME(USIBMSC.LUDB23 )
                DESCRIPTION(DB2, New York, DB2CENTDIST)
                PARTNER_LU_ALIAS(DBNYCENT)
                PARTNER_LU_UNINTERPRETED_NAME(LUDB23 )
                MAX_MC_LL_SEND_SIZE(32767)
                CONV_SECURITY_VERIFICATION(NO)
                PARALLEL_SESSION_SUPPORT(YES);

DEFINE_PARTNER_LU FQ_PARTNER_LU_NAME(USIBMSC.LUSQLVMA )
                DESCRIPTION(SQL/DS, Toronto, Car dealer 1)
                PARTNER_LU_ALIAS(DBTORDR1)
                PARTNER_LU_UNINTERPRETED_NAME(LUSQLVMA)
                MAX_MC_LL_SEND_SIZE(32767)
                CONV_SECURITY_VERIFICATION(NO)
                PARALLEL_SESSION_SUPPORT(YES);

DEFINE_PARTNER_LU FQ_PARTNER_LU_NAME(USIBMSC.SJA2018I )
                DESCRIPTION(OS/2, Montreal, Car dealer)
                PARTNER_LU_ALIAS(OMTLDLR1)
                PARTNER_LU_UNINTERPRETED_NAME(SJA2018I)
                MAX_MC_LL_SEND_SIZE(32767)
                CONV_SECURITY_VERIFICATION(NO)
                PARALLEL_SESSION_SUPPORT(YES);

DEFINE_PARTNER_LU FQ_PARTNER_LU_NAME(USIBMSC.SJA2019I )
                DESCRIPTION(OS/2, Rio de Janiero, Car dealer)
                PARTNER_LU_ALIAS(ORIODLR1)
                PARTNER_LU_UNINTERPRETED_NAME(SJA2019I)
                MAX_MC_LL_SEND_SIZE(32767)
                CONV_SECURITY_VERIFICATION(NO)
                PARALLEL_SESSION_SUPPORT(YES);

DEFINE_PARTNER_LU FQ_PARTNER_LU_NAME(USIBMSC.EMQMACB1 )
                DESCRIPTION(DHS/MVS, New York, Central Distribution)
                PARTNER_LU_ALIAS(DHNYCENT)
                PARTNER_LU_UNINTERPRETED_NAME(EMQMACB1)
                MAX_MC_LL_SEND_SIZE(32767)
                CONV_SECURITY_VERIFICATION(NO)
                PARALLEL_SESSION_SUPPORT(YES);

DEFINE_PARTNER_LU FQ_PARTNER_LU_NAME(USIBMSC.SJA2050I )
                DESCRIPTION(OS/2, Paris, Car dealer)
                PARTNER_LU_ALIAS(OPARDLR1)
                PARTNER_LU_UNINTERPRETED_NAME(SJA2050I)
                MAX_MC_LL_SEND_SIZE(32767)
                CONV_SECURITY_VERIFICATION(NO)
                PARALLEL_SESSION_SUPPORT(YES);

DEFINE_PARTNER_LU FQ_PARTNER_LU_NAME(USIBMSC.SJA2054I )
                DESCRIPTION(AS/400, San Jose, Car dealer 1)
                PARTNER_LU_ALIAS(ASJDLR1)
                PARTNER_LU_UNINTERPRETED_NAME(SJA2054I)
                MAX_MC_LL_SEND_SIZE(32767)
                CONV_SECURITY_VERIFICATION(NO)
                PARALLEL_SESSION_SUPPORT(YES);

DEFINE_PARTNER_LU FQ_PARTNER_LU_NAME(USIBMSC.VDHTORD1 )
                DESCRIPTION(DHS/VM, Toronto, Car dealer 1)
                PARTNER_LU_ALIAS(DHTORDR1)
                PARTNER_LU_UNINTERPRETED_NAME(VDHTORD1)
                MAX_MC_LL_SEND_SIZE(32767)
                CONV_SECURITY_VERIFICATION(NO)
                PARALLEL_SESSION_SUPPORT(YES);

DEFINE_PARTNER_LU_LOCATION FQ_PARTNER_LU_NAME(USIBMSC.LUDB23 )
                DESCRIPTION(DB2, New York, DB2CENTDIST)
                WILDCARD_ENTRY(NO)
                FQ_OWNING_CP_NAME(USIBMSC.SCG20 )
                LOCAL_NODE_NN_SERVER(NO);

DEFINE_PARTNER_LU_LOCATION FQ_PARTNER_LU_NAME(USIBMSC.LUSQLVMA )
                DESCRIPTION(SQL/DS, Toronto, Car dealer 1)
                WILDCARD_ENTRY(NO)
                FQ_OWNING_CP_NAME(USIBMSC.SCG20 )
                LOCAL_NODE_NN_SERVER(NO);

DEFINE_PARTNER_LU_LOCATION FQ_PARTNER_LU_NAME(USIBMSC.SJA2018I )
                DESCRIPTION(OS/2, Montreal, Car dealer)
                WILDCARD_ENTRY(NO)
                FQ_OWNING_CP_NAME(USIBMSC.SJA2018I )
                LOCAL_NODE_NN_SERVER(NO);

DEFINE_PARTNER_LU_LOCATION FQ_PARTNER_LU_NAME(USIBMSC.SJA2019I )
                DESCRIPTION(OS/2, Rio de Janiero, Car dealer)
                WILDCARD_ENTRY(NO)
                FQ_OWNING_CP_NAME(USIBMSC.SJA2019I )
                LOCAL_NODE_NN_SERVER(NO);

DEFINE_PARTNER_LU_LOCATION FQ_PARTNER_LU_NAME(USIBMSC.EMQMACB1 )
                DESCRIPTION(DHS/MVS, New York, Central Distribution)
                WILDCARD_ENTRY(NO)
                FQ_OWNING_CP_NAME(USIBMSC.SCG20 )
                LOCAL_NODE_NN_SERVER(NO);

DEFINE_PARTNER_LU_LOCATION FQ_PARTNER_LU_NAME(USIBMSC.SJA2050I )
                DESCRIPTION(OS/2, Paris, Car dealer)
                WILDCARD_ENTRY(NO)
                FQ_OWNING_CP_NAME(USIBMSC.SJA2050I )
                LOCAL_NODE_NN_SERVER(NO);

DEFINE_PARTNER_LU_LOCATION FQ_PARTNER_LU_NAME(USIBMSC.SJA2054I )
                DESCRIPTION(AS/400, San Jose, Car dealer 1)
                WILDCARD_ENTRY(NO)
                FQ_OWNING_CP_NAME(USIBMSC.SJAS400A )
                LOCAL_NODE_NN_SERVER(NO);

DEFINE_PARTNER_LU_LOCATION FQ_PARTNER_LU_NAME(USIBMSC.VDHTORD1 )
                DESCRIPTION(DHS/VM, Toronto, Car dealer 1)
                WILDCARD_ENTRY(NO)
                FQ_OWNING_CP_NAME(USIBMSC.SCG20 )
                LOCAL_NODE_NN_SERVER(NO);

DEFINE_MODE MODE_NAME(#INTER )
                COS_NAME(#INTER )
                DEFAULT_RU_SIZE(YES)
                RECEIVE_PACING_WINDOW(7)
                MAX_NEGOTIABLE_SESSION_LIMIT(8)
                PLU_MODE_SESSION_LIMIT(8)
                MIN_CONWINNERS_SOURCE(4);

DEFINE_MODE MODE_NAME(IBMRDB )
                DESCRIPTION(Logmode for DRDA connection)
                COS_NAME(#CONNECT)
                DEFAULT_RU_SIZE(YES)
                RECEIVE_PACING_WINDOW(4)
                MAX_NEGOTIABLE_SESSION_LIMIT(32767)
                PLU_MODE_SESSION_LIMIT(8)
                MIN_CONWINNERS_SOURCE(0);

DEFINE_MODE MODE_NAME(EMQMLOGM)
                DESCRIPTION(Mode for DHS/MVS)
                COS_NAME(#CONNECT)
                DEFAULT_RU_SIZE(YES)
                RECEIVE_PACING_WINDOW(4)
                MAX_NEGOTIABLE_SESSION_LIMIT(32767)
                PLU_MODE_SESSION_LIMIT(1)
                MIN_CONWINNERS_SOURCE(1);

DEFINE_DEFAULTS IMPLICIT_INBOUND_PLU_SUPPORT(NO)
                DESCRIPTION(Created on 02-25-93 at 09:50a)
                DEFAULT_MODE_NAME(BLANK)
                MAX_MC_LL_SEND_SIZE(32767)
                DEFAULT_TP_OPERATION(NONQUEUED_AM_STARTED)
                DEFAULT_TP_PROGRAM_TYPE(BACKGROUND)
                DEFAULT_TP_CONV_SECURITY_RQD(NO)
                MAX_HELD_ALERTS(10);

DEFINE_TP TP_NAME(EMQ2T)
                DESCRIPTION(DataHub Support/2)
                PIP_ALLOWED(NO)
                FILESPEC(C:\EMQ2\EMQ2T.EXE)
                CONVERSATION_TYPE(EITHER)
                CONV_SECURITY_RQD(YES)
                SYNC_LEVEL(EITHER)
                TP_OPERATION(NONQUEUED_AM_STARTED)
                PROGRAM_TYPE(BACKGROUND)
                RECEIVE_ALLOCATE_TIMEOUT(INFINITE);

DEFINE_CPIC_SIDE_INFO SYMBOLIC_DESTINATION_NAME(SMTLDLR1)
                DESCRIPTION(DHS/2, Montreal, Car dealer)
                FQ_PARTNER_LU_NAME(USIBMSC.SJA2018I )
                MODE_NAME(IBMRDB )
                TP_NAME(EMQ2T);

DEFINE_CPIC_SIDE_INFO SYMBOLIC_DESTINATION_NAME(SRIODLR1)
                DESCRIPTION(DHS/2, Rio de Janiero, Dealer 1)
                PARTNER_LU_ALIAS(ORIODLR1 )
                MODE_NAME(IBMRDB )
                TP_NAME(EMQ2T);

DEFINE_CPIC_SIDE_INFO SYMBOLIC_DESTINATION_NAME(SMNYCENT)
                DESCRIPTION(DHS/MVS, New York, Central Distribution)
                PARTNER_LU_ALIAS(DHNYCENT )
                MODE_NAME(EMQMLOGM)
                TP_NAME(ANY);

DEFINE_CPIC_SIDE_INFO SYMBOLIC_DESTINATION_NAME(SPARDLR1)
                DESCRIPTION(DHS/2, Paris, Car dealer)
                PARTNER_LU_ALIAS(OPARDLR1 )
                MODE_NAME(IBMRDB )
                TP_NAME(EMQ2T);

DEFINE_CPIC_SIDE_INFO SYMBOLIC_DESTINATION_NAME(SASJDLR1)
                DESCRIPTION(DHS/400, San Jose, Car dealer 1)
                PARTNER_LU_ALIAS(ASJDLR1 )
                MODE_NAME(IBMRDB )
                TP_NAME(QDMU/QSTSC);

DEFINE_CPIC_SIDE_INFO SYMBOLIC_DESTINATION_NAME(SVTORDR1)
                DESCRIPTION(DHS/VM, Toronto, Car dealer 1)
                PARTNER_LU_ALIAS(DHTORDR1 )
                MODE_NAME(IBMRDB )
                TP_NAME(EMQVRSID);

START_ATTACH_MANAGER;



Glossary

This glossary contains the terms and abbreviations used in the DataHub/2 user interface and publications. Each entry contains a definition of the term and often includes detailed information about how the term applies specifically to the way you use DataHub/2.

A

action. One of the defined tasks that DataHub/2 performs. In DataHub/2, for example, an action might be to display database activity, to add authorizations, or to run JCL or SQL.

action bar. The area at the top of a DataHub/2 window with entries that enable a user to access the selections available in that window. For example, when you select Help on the action bar, the Help pull-down menu is displayed. See also DataHub/2 window.

Add function. In DataHub/2, Add is a function that adds authorizations for one or more users on one or more objects. To use the Add function, select an object from the client area, then select Add from the Actions pull-down menu or use the DataHub/2 ADD command. This function is part of the DataHub/2 tools feature.

Advanced program-to-program communications (APPC). (1) An implementation of the Systems Network Architecture (SNA) logical unit (LU) 6.2 protocol that allows interconnected systems to communicate and share the processing of programs. See logical unit 6.2 (LU 6.2). (2) The general facility characterizing the LU 6.2 architecture and its various implementations in products. (3) Sometimes used to refer to the LU 6.2 architecture and its product implementations as a whole, or to an LU 6.2 product feature in particular, such as an APPC application programming interface.

alias name. A name, unique within one of two or more interconnected networks, that is assigned by a gateway system service control point and used in that subnetwork to represent a network addressable unit that resides in another subnetwork. Alias names must be predefined within a gateway system service control point; if an alias is not provided, it is assumed to be the same as the real name.

APPC. See Advanced program-to-program communications.

append. To add information to the end of an existing file.

application requester (AR). The source of a request to a remote RDBMS. When an application uses SQL to access a remote relational database, the function receiving the application request must determine where the data is located and connect with the remote relational database system. In DRDA architecture terms, the function that makes this connection is the application requester. See also application server.

application server (AS). The target of a request from an application requester; the function that receives the requests from the application requester and forwards them to the database management system. See also application requester.

application tracing. The recording of information about each processing event during the execution of an application. The contents of the trace record vary, depending on what is relevant for an event. Typically, the time when events occurred, their duration, and the way they were performed are recorded.

AR. See application requester.

AS. See application server.

AS/400. Application System/400.

authorization. A general term for the privileges and authorities that allow users to access objects and perform actions on objects in local and remote RDBs. Authorizations can be on individual objects, grouped by object type, or grouped by the user to whom they apply. See also user authorizations and object authorizations.

authorization ID. The user identification associated with a set of privileges. An authorization ID may represent an individual, an organizational group, or a function.

authorization statement. A GRANT or REVOKE statement used to modify authorizations. Additional AS/400 authorizations are accessed with the AS4GRANT and AS4REVOKE statements.

B

Backup. In DataHub/2, Backup is a function that saves a copy of an OS/400 or DB2 relational table in a form suitable for subsequent recovery. To use the Backup function, select an object from the client area, then select Backup from the Utilities pull-down menu, or use the DataHub/2 BACKUP command. This function is part of the DataHub/2 tools feature.

base product. A relational database management product, that is, DB2, SQL/DS, OS/400, or OS/2 Database Manager.

BookManager READ. BookManager READ is an IBM licensed program used to find and read softcopy books that are available on your host or workstation. There are four versions of this application for use on different platforms (READ/MVS, READ/VM, READ/2, and READ/DOS).

bufferpool. Virtual storage reserved to satisfy the buffering requirements for tables or indexes. Bufferpools are applicable to DB2 environments only.

C

cascading delete. The deletion of multiple authorizations after a single authorization is selected for deletion. If a user has an authorization that can be granted to other users, and it has been granted to other users, deletion of the original authorization leads to the subsequent deletion of the authorizations that were granted. This applies to all authorizations that were granted from the original user and subsequently. For example, Erik grants an authorization to Jane, who in turn grants it to Pam and Gavin. Gavin grants it to James and Moira. If all of the users have only one instance of this authorization, deleting it from Erik will delete it from all of them. If a user has been granted an authorization by more than one other user, that authorization is retained by the user when the granting users have their authorizations deleted, provided there is at least one authorization remaining.

cascading pull-down. A pull-down that is invoked from another pull-down.

catalog. (1) A set of system tables maintained by OS/2 Database Manager. Catalog tables are created when the database is created and contain information about tables, views, and indexes. (2) A utility that records a database and information about the database.

CCSID. See coded character set identifier.

CL. See control language.

client area. The part of the DataHub/2 window inside the border and below the action bar. It is the workspace containing the objects with which you can work. In DataHub/2, you can build a treelike representation of all or some of the hosts, RDBs, and objects that are configured to DataHub/2, using a series of display actions. You can then use actions to perform tasks on selected objects.

code page. A particular assignment of hexadecimal identifiers to graphic characters.

code point. An identifier that tells a tool (either the host or the workstation component) how to interpret the data following the code point in a tools conversation flow.

coded character set identifier (CCSID). An SAA Code Page Architecture (CPA) 2-byte (unsigned) binary number that uniquely identifies an encoding scheme and one or more pairs of character sets and code pages.

collapse. A CUA term relating to the table of contents of an online publication. To collapse a topic, you can either use the mouse to click on the minus sign beside the topic, or you can highlight that topic and press the minus (-) key. The plus sign that then appears beside the topic indicates that you can expand the topic.

collection. A user-defined grouping of objects (for example, tables, views, or packages) that are contained within an RDB. In DataHub/2, the collection is the second qualifier of a fully qualified object name, rdb.collection.object, for example: POKDDB1.FINANCE.USER_TABLE. In this example, FINANCE is the name of the collection for this table.

command. A request to execute a particular DataHub/2 function. You can issue a DataHub/2 command from an OS/2 window, a DataHub/2 window, or a C program.

commit. The process that allows data in an RDB that has been changed by one application or user to be accessed by other applications or users. When a COMMIT occurs, locks are freed so that other applications can access the data that was just committed.

commit frequency. The number of SQL statements to be run before a COMMIT is issued. Setting the commit frequency to a low number minimizes the amount of work that may need to be performed again if an SQL statement fails. Setting the commit frequency to a higher number causes more locking to occur, restricting access to the database. The value you set should depend on the activity, size, and security of the RDBs being updated.

commit point. A point in the commit process that occurs when data changed by a user or application is considered complete. The data is updated and all locks on the data are freed.

Common Programming Interface (CPI). Definitions of those application development languages and services that have (or are intended to have) implementations on and a high degree of commonality across the SAA environments. One of the three SAA architectural areas (the other two being Common Communications Support and Common User Access).

Common User Access (CUA). Guidelines for the dialog between a person and the workstation or terminal. One of the three SAA architectural areas (the other two being Common Programming Interface and Common Communications Support).

Communications Manager. A component of OS/2 Extended Services that lets a workstation connect to a host and use the host resources as well as the resources of other PWSs to which the workstation is attached, either directly or through a host.

contextual help. Help that gives information about the specific item the cursor is on. The help is contextual because it provides information about a specific item as it is currently being used. Contrast with general help.

continue on error. An error-handling option available for most DataHub/2 functions. When you specify this option, DataHub/2 will continue to run an application when it encounters a non-severe error (an error with a severity code less than or equal to 8). Contrast with stop on error.

control language. The set of all commands with which a user requests system functions on an AS/400 system.

control point. The programmable workstation where DataHub/2 started executing.

cooperative processing. In SAA usage, the coordinated use of a programmable workstation with one or more linked systems to accomplish a single goal.

Copy Data. In DataHub/2, Copy Data is a function that is used to:
• Copy basic database objects (tables, views, and indexes). This can be within one RDB or between two RDBs, one at a time or in groups.
• Copy authorizations from one or more objects at a source RDB to corresponding objects at a target RDB. You can copy either:
  − Authorizations for all users on the object
  − Authorizations for selected users on the object.
• Copy authorizations held by a user on objects to another user within one or between two RDBs. You can copy either:
  − All authorizations on all objects authorized for a user
  − The authorizations for one or more selected objects authorized for a user.
To use the Copy Data function, select Copy from the Actions pull-down menu or use the DataHub/2 COPY command. This function is part of the DataHub/2 tools feature.

CPI. See Common Programming Interface.

CPU time. In DataHub/2, the amount of CPU time used by the RDBMS work unit selected from a display request since it began processing. The time is in the format specified for the OS/2 system on which DataHub/2 is running, with three decimal places for milliseconds separated by the decimal separator specified for the OS/2 system. For example, if the OS/2 system uses the JIS time format with a comma decimal separator, the time will appear as: 1 day 01:13:14,138. If the time is less than a day, it will appear as: 12:54:32,002. If the time is greater than 30 days, it will appear as: >30 days 12:54:32,002.

CUA. See Common User Access.

customize. To change the system characteristics or defaults to meet user or site requirements.

D

DASD. See direct access storage device.

data control language. SQL language statements used to control user authorization to relational database objects.

data definition language. SQL language statements used to create definitions of relational database objects.

data integrity. (1) The condition that exists as long as accidental or intentional destruction, alteration, or loss of data does not occur. (2) Preservation of data for its intended use. See also integrity.

data link control. (1) In Systems Network Architecture, the protocol layer that consists of the link stations that schedule data transfer over a link between two nodes and perform error control for the link. (2) In Communications Manager, a profile containing parameters for a communication adapter.

database. In DB2, a collection of table spaces and index spaces.

database administrator (DBA). An individual or group responsible for the design, development, operation, safeguarding, maintenance, and use of a database. DBA is the highest authority level an SQL/DS user can have.


database catalog. A database table that defines the database, for example, authorizations, structure, views.

Database Manager. A component of OS/2 Extended Services consisting of Database Services, Query Manager, and Remote Data Services. Database Manager is based on the relational model of data and allows users to create, update, and access databases.

DATABASE 2 (DB2). A program that provides a full-function relational database management system on MVS and supports access from MVS applications running under IMS, CICS, TSO, or batch environments.

DataHub. IBM SystemView Information Warehouse DataHub. This refers to the overall family of DataHub components, including:
• IBM SystemView Information Warehouse DataHub/2 (DataHub/2)
• IBM SystemView Information Warehouse DataHub Support/MVS (DataHub Support/MVS)
• IBM SystemView Information Warehouse DataHub Support/VM (DataHub Support/VM)
• IBM SystemView Information Warehouse DataHub Support/2 (DataHub Support/2)
• IBM SystemView Information Warehouse DataHub Support/400 (DataHub Support/400).

DataHub platform feature. The platform feature of DataHub provides you with the ability to display objects in all your managed databases and run dynamic SQL and JCL. It provides the base on which you can install the tools feature or tools from other sources.

DataHub tools feature. The tools feature of DataHub provides the ability to copy data between host databases, manage authorizations at those databases, and query the status of database requests. It also provides utilities to load, unload, and back up host databases.

DataHub Support. IBM SystemView Information Warehouse DataHub Support. This term is used to refer to the general collection of host DataHub components. The following components are the "managed host" components that are managed from the DataHub/2 control point:
• DataHub Support/MVS
• DataHub Support/VM
• DataHub Support/400
• DataHub Support/2.

DataHub/2. The workstation component of DataHub running under OS/2.

DataHub/2 database. The database that contains the DataHub/2 internal tables, which store information such as supported host names and additional tool information.

DataHub Support/2. The host component of DataHub for the OS/2 environment.

DataHub Support/400. The host component of DataHub for the OS/400 environment.

DataHub Support/MVS. The host component of DataHub for the MVS environment.

DataHub Support/VM. The host component of DataHub for the VM environment.

DataHub/2 window. The primary window of DataHub/2, which is the first window you see after you have logged on to DataHub/2. The main components of the DataHub/2 window are the action bar at the top of the window and the client area below the action bar. See also action bar and client area.

DataHub/2 workstation. The programmable workstation where DataHub/2 is running.

DBA. See database administrator.

DBCS. See double-byte character set.

dbspace. A logical allocation of space in a storage pool contained in a database. Contains one or more tables and their associated indexes. Dbspaces are applicable to SQL/DS environments only.

DB2. See DATABASE 2.

DDCS/2. See Distributed Database Connection Services/2.

DDS. See Distributed Data Services.

default value. A value used by DataHub/2 when you have not specified a value. This value can be set by the system and overridden by you.

Delete function. In DataHub/2, Delete is a function that is used to delete authorizations for one or more users on one or more objects. To use the Delete function, select Delete from the Actions pull-down menu or use the DataHub/2 DELETE command. This function is part of the DataHub/2 tools feature.

delimited ASCII (DEL) file format. A DEL file is a sequential ASCII file with row and column delimiters. Each DEL file is a stream of ASCII characters consisting of cell values ordered by row and then by column.

direct access storage device (DASD). A device for storing data in which access time is effectively independent of the location of the data.

Display function. In DataHub/2, there are two functional features of the Display function:
• To display objects on the client area of the DataHub/2 window. This function is part of the DataHub/2 platform feature.
• To obtain information about database activity at any part of the network of systems within your company. The information can be used to assist in identifying problems arising in the distributed database network. This function of Display is part of the DataHub/2 tools feature and is referred to as "Display Status."
Whether you are displaying objects or database activity, the same Display action is selected from the Actions pull-down and the same DISPLAY command is used. To use the Display function, select Display from the Actions pull-down menu or use the DataHub/2 DISPLAY command.

Display Status function. See Display function.

distributed data. Data that is distributed across multiple RDBMSs in a network but appears to users as a logical whole, locally accessible.

Distributed Data Services (DDS). The OS/2 implementation of Distributed Data Management (DDM). DDM (and DDS in OS/2) is a prerequisite for Distributed Relational Database Architecture (DRDA).

distributed database. A database that appears to users as a logical whole, locally accessible, but is composed of databases in multiple RDBMSs.

Distributed Database Connection Services/2. A software connection between a database client on a workstation and a database on a host.

distributed environment. An environment in which an organization's data is distributed among different computing environments that may be in different geographical locations. A common characteristic of distributed environments is a distributed database.

distributed processing. Processing that involves resources or functions dispersed among two or more interconnected processors.

Distributed Relational Database Architecture. A connection protocol for distributed relational database processing that is used by IBM's relational database products. DRDA consists of protocols for communication between an application and a remote RDBMS, and for communication between RDBMSs.

distributed relational database system. A system involving multiple locations connected together in a communications network, in which each location may be one or more relational database systems. In a distributed relational database management system, a user at one location can access any data in the network as if the data were stored at the user's location.

DLC. See data link control.

DLL. See dynamic link library.

double-byte character set. A set of characters in which each character occupies two bytes. Languages such as Japanese, Chinese, and Korean, which contain more symbols than can be represented by 256 code points, require double-byte character sets. Entering, displaying, and printing DBCS characters requires special hardware and software support.

DRDA. See Distributed Relational Database Architecture.

dynamic link library. A file containing a dynamic link routine that is linked at load or run time.

E

elapsed time. In DataHub/2, the total time since the RDBMS work unit selected for a display request began processing. The time is in the format specified for the OS/2 system on which DataHub/2 is running, with three decimal places for milliseconds separated by the decimal separator specified for the OS/2 system. For example, if the OS/2 system uses the JIS time format with a comma decimal separator, the time will appear as: 1 day 01:13:14,138. If the time is less than a day, it will appear as: 12:54:32,002. If the time is greater than 30 days, it will appear as: >30 days 12:54:32,002.

entry field. An area of the window where you can type information. Its boundaries are usually indicated. See also selection field.

expand. A CUA term relating to the table of contents of an online publication. To expand a topic, you can either click on the plus sign beside the topic or highlight the topic and press the plus (+) key. The minus sign that appears beside the topic indicates that you can collapse the topic.

extended help. See general help.


F<br />

foreign key. A key that is specified in the definition<br />

of a referential constraint. A table with a foreign key<br />

is a dependent table. The key must have the same<br />

number of columns, with the same descriptions, as<br />

the primary key of the parent table.<br />

from requester type requester name. In DataHub/2,<br />

this phrase is applicable when the unit of work is<br />

servicing a distributed request. It indicates the type<br />

and name of the system that is acting as a requester<br />

to the serving work unit.<br />

The valid values for the requester type are:<br />

• DB2 vv.rr.m<br />

• SQL/DS vv.rr.m<br />

• OS/400 vv.rr.m<br />

• OS/2 <strong>Database</strong> Manager vv.rr.m<br />

Where:<br />

vv.rr.m indicates the version (vv), release (rr),<br />

and modification level (m) of the requester.<br />

See also RDBMS work unit, serving work, and on<br />

server type server name.<br />

function. In DataHub/2, function is a general term<br />

used to describe both the action you can perform<br />

using the DataHub/2 window and the command you<br />

can run from a command line.<br />

G<br />

gateway. A functional unit that connects two<br />

computer networks of different network architectures.<br />

general help. Help that provides information about<br />

the contents of the application window from which you<br />

requested help. Contrast with contextual help.<br />

grant. Gives authority to a user ID or group ID.<br />

H<br />

host. A computer that usually performs network<br />

control functions and provides end users with<br />

services such as computation and database access.<br />

The host environments that are supported in DataHub<br />

are MVS, OS/400, OS/2, and VM.<br />

I

IBM Extended Services for OS/2. An IBM licensed program that contains the Database Manager and Communications Manager components and uses a base operating system equivalent to the IBM Operating System/2 Standard Edition Version 1.30.1.

IBM Structured Query Language/400. An IBM licensed program that provides a set of commands to access and manipulate data on an OS/400 system. Applications written using SQL/400 can be easily transported to other IBM systems.

implication message. Extra information given to a user before executing a copy object operation, or an add, delete, or copy authorization operation. The purpose of an implication message is to alert the user to an important consequence of the execution of the operation.

index. A set of pointers that are logically ordered by the values of a key. Indexes provide quick access to data and can optionally enforce uniqueness on the rows in a table. A fully qualified index name is of the form rdb.collection.object.

Information Presentation Facility (IPF). The OS/2 programming tool used to develop help information for DataHub/2.

integrity. The condition that exists when data, systems, and programs are protected from inadvertent or malicious destruction or alteration. See also data integrity.

IPF. See Information Presentation Facility.

J

JCL. See job control language.

JES. See job entry system.

job control language (JCL). A control language used to identify an MVS job to an operating system and to describe the job's requirements.

job entry system (JES). An MVS job administration system. MVS jobs are submitted through JES, which controls the job execution and produces an output of the job results.

job name. The identifier of the job associated with the RDBMS work unit selected for the display details request. Job name is applicable to OS/400 and MVS environments only.

• OS/400—Job name is the fully qualified job identifier. See also SYSID (OS/400).
• MVS—Job name corresponds to the DB2 correlation identifier.

K

keyword. A part of a DataHub/2 command parameter that is shown in uppercase letters in the syntax diagram. See also parameter.


L

LAN. See local area network.

like. Refers to two or more instances of the same RDBMS, for example, two instances of SQL/DS. Contrast with unlike.

like environments. A network of similar RDBMSs that provide access to data residing at any of the locations containing a participating instance of the RDBMS. Contrast with unlike environments.

Load function. In DataHub/2, Load is a function that inserts or replaces rows in an existing relational table, or view (OS/2), from data contained in existing sequential files at any RDBMS.

To use the Load function, select an object from the client area, then select Load from the Utilities pull-down menu or use the DataHub/2 LOAD command. This function is part of the DataHub/2 tools feature.

local area network (LAN). A network of devices that are connected to one another for communication. A local area network can be connected to a larger network. See also wide area network.

local identifier. In DataHub/2, the unique identifier assigned to the RDBMS work unit at the system selected for the display request. The local identifier is the system-specific identifier (SYSID). See also SYSID (MVS), SYSID (OS/400), and SYSID (VM).

local system. The RDBMS in the user's processor. Contrast with remote system.

local work. In DataHub/2, local work is a unit of work that does not use distributed database services. The request and the processing of the request both happen at the same database. DataHub/2 displays local units of work under the heading "local work."

The definition of a local unit of work is different for each host environment and query that is requested at a host or RDB:

• Display at RDB:
   − OS/400—A unit of work is classified as local when an application is running at the selected RDB and is accessing relational data at that same RDB.
   − MVS—A unit of work is classified as local when an application is running at the selected RDB and is accessing relational data at that same RDB.
   − VM—In SQL/DS the unit of work is never classified as local. Units of work on VM are classified as either requesting or serving to allow DataHub/2 to show the relationship between the SQL/DS requester and the SQL/DS database virtual machine.
• Display at host:
   − OS/400—Only one RDB can reside at an OS/400 host. A unit of work is classified as local when an application is running at the host currently being processed by DataHub/2 and accessing data at the RDB located at the host.
   − MVS—One or more RDBs can reside at an MVS host. DataHub/2 provides information for each RDB. Therefore, a unit of work is classified as local when an application is running at the RDB currently being processed by DataHub/2 and accessing relational data at that same RDB.
   − VM—In SQL/DS the unit of work is never classified as local. Units of work on VM are classified as either requesting or serving to allow DataHub/2 to show the relationship between the SQL/DS requester and the SQL/DS database system.

location. In DataHub/2, a location is either the name of a host or a relational database. The location is represented by an identifier of 1 to 18 characters.

lock. A mechanism for controlling access to a database object to maintain that object's integrity. See also LOCKID.

LOCKID. Lock identifier. An alphanumeric string of 1 to 73 characters that allows the host to identify an object on the host system on which locks are held or requested. LOCKIDs are system specific and defined by each host. See also LOCKID (MVS), LOCKID (OS/400), and LOCKID (VM).

LOCKID (MVS). The lock identifier (LOCKID) for an MVS system is made up of a DB2 internal lock name and a DB2 internal hash token. It is of the form:

   lock_name.hash_token

Where:

   lock_name    Is a character representation, from the set 0-9 and A-F, of a DB2 internal lock name representing the hexadecimal values in LOCKID.
   . (period)   Is a one-character delimiter.
   hash_token   Is a character representation, from the set 0-9 and A-F, of a DB2 internal hash token representing the hexadecimal values in LOCKID.

There is one exception: a DB2 internal system lock or latch, in which case the LOCKID is SYSLOCK or LATCH, respectively.

LOCKID (OS/400). The lock identifier (LOCKID) for an OS/400 system can be one of these formats:

   lib/obj*type
   lib/file(member)
   lib/file(member)*ACCPTH
   lib/file(member)*DATA
   lib/file(member)recno

Where:

   lib       Is the object library, which is from 1 to 10 characters long.
   /         Is a one-character separator between the library and object name.
   obj       Is the object name, which is from 1 to 10 characters long.
   *type     Is the object type, which is from 1 to 7 characters long. The delimiter (asterisk) separates the object name from the object type.
   file      Is the file name, which is from 1 to 10 characters long.
   (         Is a one-character separator, an opening parenthesis, between the file and member names.
   member    Is a file member name, which is from 1 to 10 characters long.
   )         Is a one-character separator, a closing parenthesis, between the member name and the last part of the LOCKID.
   *ACCPTH   Is a constant that indicates that database file members with associated access paths (indexes) have locks on the access path.
   *DATA     Is a constant that indicates that database file members have a lock on a control block.
   recno     Is from 1 to 10 characters long, from the set 0-9, representing a relative record number.

LOCKID (VM). The lock identifier (LOCKID) for a VM system is a dbspace number made up of 1 to 5 digits, from the set 0-9, with no leading zeros and no leading or trailing blanks.

The exception is for database locks. In this case, the LOCKID is made up of the two characters DB.
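The VM LOCKID rule lends itself to a simple check. The following minimal C sketch accepts the two-character database lock identifier DB or a dbspace number of one to five digits with no leading zero; it illustrates the rule as stated above and is not product code.

   #include <ctype.h>
   #include <stdio.h>
   #include <string.h>

   /* Return 1 if id is a valid VM LOCKID as described above. */
   int valid_vm_lockid(const char *id)
   {
       size_t i, n = strlen(id);

       if (strcmp(id, "DB") == 0)          /* database lock */
           return 1;
       if (n < 1 || n > 5 || id[0] == '0') /* wrong length or leading zero */
           return 0;
       for (i = 0; i < n; i++)
           if (!isdigit((unsigned char)id[i]))
               return 0;
       return 1;                           /* dbspace number */
   }

   int main(void)
   {
       printf("%d %d %d\n", valid_vm_lockid("12345"),
              valid_vm_lockid("DB"), valid_vm_lockid("012"));  /* 1 1 0 */
       return 0;
   }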

lock mode. In DataHub/2, the lock attribute that specifies the type of access that concurrently running programs can have to a resource that has been locked by the RDBMS work unit selected for a display lock request. For example, a lock mode of S for share indicates that the lock owner and any concurrent application processes can read, but not change, the locked data. The exact meaning of lock modes may differ for each host due to differences in the RDBMS.

lock object name. In DataHub/2, the name of the object on which a lock is being requested or held by the RDBMS work unit selected for a display lock request.

lock type. In DataHub/2, the type of database object (for example, dbspace, table space, or index key) on which the lock is being requested or held by the RDBMS work unit selected for a display lock request.

logical unit. (1) A port through which an end user accesses the SNA network in order to communicate with another end user and through which the end user accesses the functions provided by system services control points. (2) A type of network addressable unit that enables end users to communicate with each other and gain access to network resources.

logical unit 6.2. A port through which an application in a distributed processing environment accesses the SNA network to communicate with another application using the SNA general data stream (which is a structured data system).

LU. See logical unit.

LU name. A 1- to 8-character name of a logical unit within an SNA network. A logical unit is a port through which a user communicates with another user over an SNA network.

LU 6.2. See logical unit 6.2.

LUW. Logical unit of work. See also unit of work and RDBMS work.

LUWID. Logical unit of work identifier. A unique identifier for a distributed unit of work that can be used to access it at both the application requester and the application server from the display RDBMS work action. The LUWID identifies the components of a unit of work. It consists of:

• A netname, which is a 1- to 8-character network ID.
• An LU name, which is a 1- to 8-character name of a logical unit within an SNA network. A logical unit is a port through which a user communicates with another user over an SNA network.
• An LUW instance, which is 12 hexadecimal characters long. With the netname and LU name, the LUW instance provides a unique identifier for a unit of work.
• A period (.) separator between the parts of the LUWID.

An example of an LUWID is:

   SAN_JOSE.STL.0000011F6C22

LUW instance. Twelve hexadecimal characters that, with the netname and LU name, provide a unique identifier of a unit of work.
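Because the LUWID is a period-delimited string, its parts can be recovered mechanically. The following minimal C sketch splits an LUWID using the field widths given above; error handling is reduced to a bare format check, and it is an illustration rather than the product's parser.

   #include <stdio.h>
   #include <string.h>

   /* Split netname.luname.instance; each %[^.] stops at a period. */
   int parse_luwid(const char *luwid, char *net, char *lu, char *inst)
   {
       if (sscanf(luwid, "%8[^.].%8[^.].%12s", net, lu, inst) != 3)
           return -1;
       return strlen(inst) == 12 ? 0 : -1;  /* instance is 12 hex chars */
   }

   int main(void)
   {
       char net[9], lu[9], inst[13];
       if (parse_luwid("SAN_JOSE.STL.0000011F6C22", net, lu, inst) == 0)
           printf("netname=%s luname=%s instance=%s\n", net, lu, inst);
       return 0;
   }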


M

map. To copy information in one form to another. For example, to copy data contained in an input file on a disk to tables in a database.

Multiple Virtual Storage (MVS). Multiple Virtual Storage, consisting of MVS/System Product Version 1 and the MVS/370 Data Facility Product operating on a System/370 processor.

MVS. See Multiple Virtual Storage.

N

national language support (NLS). Support in a product that allows direct interaction in the language and code pages of the customer's country.

navigation path. The path taken by a user when using the DataHub/2 window to reach an object using a series of display actions.

netname. A 1- to 8-character name identifying a network. An example of a netname is:

   SAN_JOSE

Networking Services/2. An IBM program that requires OS/2 Communications Manager and provides Advanced Peer-to-Peer Networking (APPN) services.

NLS. See national language support.

node. A host environment instance (an MVS, VM, OS/400, or OS/2 system) in which a DataHub component is running. A node can have one or more DRDA requesters and/or server components.

Non-delimited ASCII (ASC) file format. An ASC file is a sequential ASCII file with row delimiters, used to exchange data with any ASCII product. Each ASC file is a stream of ASCII characters consisting of data values organized by row and column.

nonlabeled tape. A tape that has no labels. Tape marks are used to indicate the end of the volume and the end of each data file.

number of locks. In DataHub/2, the number of locks that the RDBMS work unit selected for a display locks request is waiting for or holding.

O

object authorizations. In DataHub/2, the authorizations associated with an object, such as a table, view, or package. Object authorizations can be copied from one object to another of the same object type. Contrast with user authorizations.

object generators. The mechanism used to generate a list of object occurrences for an object type. The object occurrences may have been retrieved either:

• By a specific program, or
• By executing SQL at a remote RDB.

A tool that returns objects to the DataHub/2 window is regarded as an object generator, for example, the DataHub Display tool.

object occurrences. A component of a database environment. In DataHub/2, an example of an object occurrence might be a relational database named RCHAS333 that appears under the RDB object type in the client area. Object occurrences are displayed under the name and icon of the object type. See also object type.

object type. A type of object associated with a database environment. In DataHub/2, examples of object types are databases, tables, and views. Object types are displayed in the client area above the object occurrences they represent. See also object occurrences.

OEM. Original equipment manufacturer.

omit. To remove objects that you are not working with from the client area.

on server type server name. In DataHub/2, this phrase is applicable when the unit of work is issuing a distributed request. It indicates the type and name of the system that is acting as a server to the requesting work unit.

The valid values for the server type are:

• DB2 vv.rr.m
• SQL/DS vv.rr.m
• AS/400 vv.rr.m

Where:

   vv.rr.m   Indicates the version (vv), release (rr), and modification level (m) of the server.

See also RDBMS work unit, requesting work, and from requester type requester name.

Operating System/2 (OS/2). An IBM licensed program that can be used as the operating system for PWSs. The OS/2 licensed program can perform multiple tasks simultaneously.

Operating System/400 (OS/400). An IBM licensed program that can be used as the operating system for the AS/400 system.

optimizer statistics. The statistics used by the RDBMS optimizer to determine the best access paths to data.

OS/2. See Operating System/2.

OS/400. See Operating System/400.

OS/2 ES. See IBM Extended Services for OS/2.

output field. A field in a device file in which data can be modified by the program and sent to the device during an output operation.

P

package. The control structure produced when the SQL statements in an application program are bound to an RDBMS. The RDBMS uses the control structure to process SQL statements encountered during statement execution.

package section. Identifies the section number within the package assigned to the currently running SQL statement. In DataHub/2, the package section is associated with the RDBMS work unit selected for a display details request at an SQL/DS RDB.

pane. One of the separate areas within a split window. Panes can be sized by the user; resizing of the panes does not take place, however, until either the mouse button is released or the Enter key has been pressed. Horizontal scrolling is independent within each pane, but vertical scrolling is across the two panes.

pane key. A key, or combination of keys, that can be used to perform a function at a pane.

parameter. A keyword, a variable, or a combination of keywords and variables used in conjunction with a command to affect its result. In DataHub/2 command syntax, required parameters are displayed on the main path of the syntax diagram, and optional parameters are displayed below the main path. The default parameter is underlined.

participating instance. An RDBMS that is part of a distributed relational database system.

peer node. An end point of a communications link or a junction common to two or more links in a network. Nodes can be processors, communication controllers, cluster controllers, terminals, or workstations. Nodes can vary in routing and other functional capabilities.

Personal Computer/Integrated Exchange Format (PC/IXF) file format. IMPORT accepts only PC/IXF files, not host IXF files. PC/IXF is a structured description of a database table that contains an external representation of the internal table.

physical unit. A set of protocols and services defined by Systems Network Architecture for communication between application programs.

plan. In DataHub/2, a group of one or more packages. Plans are applicable to DB2 environments only.

pop-up window. A movable window, fixed in size, in which you provide information required by DataHub/2 so that it can continue to process your request.

Presentation Manager. The interface of the OS/2 program that presents a graphics-based interface to applications and files installed and running on the OS/2 program.

primary key. A unique, non-null key that is part of the definition of a table. A table cannot be defined as a parent if it does not have a primary key.

privilege. The right or authority to access a specific database object in a specified way.

privilege hierarchy. A structure that shows the authorizations granted to users, and the users who granted them.

profile. A DataHub/2 profile is linked to your OS/2 user ID and identifies the host user IDs and passwords that are to be used when that profile is active.

programmable workstation (PWS). A workstation that has some degree of processing capability and that allows a user to change its functions.

prompt. A symbol or action that requests you to enter information.

PU. See physical unit.

pull-down menu. A list of choices extending from a selected action-bar choice that gives you access to actions, routings, and settings related to an object.

push button. A rectangle with text inside. Push buttons are used in windows for actions that occur immediately when the push button is selected.

PWS. See programmable workstation.

Q

qualifier. In DataHub/2, the specification of some criteria that can be used to limit the number of object occurrences of an object type that are to be displayed.

R

radio button. A circle with text beside it. Radio buttons are combined to show a user a fixed set of choices from which only one can be selected. The circle is partially filled when a choice is selected.

RDB. See relational database.


RDB name. The DRDA unique identifier for a database within a network.

RDBMS. See relational database management system.

RDBMS work. The database component of a running application. When a program is run against an RDB, the RDB creates a logical unit of work on the user's behalf.

In DataHub/2, RDBMS work is a display request using the DataHub/2 window that provides a summary list of the units of work (local, requesting, or serving work) executing at a host or RDB.

When using the DataHub/2 DISPLAY command, the equivalent function is RDBMS_WORK SHORT. See also local work, requesting work, or serving work.

RDBMS_WORK DEPENDENT UNITS. In DataHub/2, RDBMS_WORK DEPENDENT UNITS is the DataHub/2 DISPLAY command that provides a list of the dependent RDBMS work units for a lock held, requested, or waited for by another RDBMS work unit. You can determine the units of work currently running that are affecting or being affected by the locks that a unit of work is requesting.

When using the DataHub/2 window, the equivalent Display Status function is Display Dependent RDBMS work.

RDBMS_WORK DETAILS. In DataHub/2, RDBMS_WORK DETAILS is the DataHub/2 DISPLAY command that provides detailed status and application information for a specific unit of work.

When using the DataHub/2 window, the equivalent Display Status function is Display Details.

RDBMS_WORK LOCKS. In DataHub/2, RDBMS_WORK LOCKS is the DataHub/2 DISPLAY command that provides a list of the locks held, requested, or waited for by the specified RDBMS work units.

When using the DataHub/2 window, the equivalent Display Status function is Display Locks.

RDBMS work needing lock. In DataHub/2, RDBMS work needing lock is the result of a display request when selecting the related object type of Dependent RDBMS work using the DataHub/2 window. It provides a list of the units of work that are holding or requesting locks on an object or objects. You can determine the units of work currently running that are affecting or being affected by the locks that a unit of work is requesting.

When using the DataHub/2 DISPLAY command, the equivalent function is RDBMS_WORK DEPENDENT UNITS.

RDBMS_WORK SHORT. In DataHub/2, RDBMS_WORK SHORT is the DataHub/2 DISPLAY command that provides a summary list of the RDBMS work units that match the qualifier or qualifiers that you have specified.

When using the DataHub/2 window, the equivalent Display Status function is Display RDBMS work.

RDBMS_WORK SQL STATEMENT. In DataHub/2, RDBMS_WORK SQL STATEMENT is the DISPLAY command that provides the SQL statements for the selected RDBMS work units (local, requesting, or serving work). RDBMS work SQL statement is applicable to OS/400 and MVS environments only.

When using the DataHub/2 window, the equivalent Display Status function is Display SQL statement.

RDBMS work unit. In DataHub/2, an RDBMS work unit is a recoverable sequence of operations that is sometimes referred to as a logical unit of work. The RDBMS ensures the consistency of the data by verifying that either all the data changes made during a unit of work are performed, or none of them is performed.

The RDBMS work unit is displayed as either local work, requesting work, or serving work.

RDS. See remote data services.

Recover function. In DataHub/2, Recover is a function that retrieves a backup copy of a table and applies log or journal changes to return the table to its current state at a distributed OS/400 or DB2 system.

To use the Recover function, select an object from the client area and select Recover from the Utilities pull-down menu, or use the DataHub/2 RECOVER command. This function is part of the DataHub/2 tools feature.

referential constraint. The relationship between two tables that controls the association of data between them, ensuring consistency. A referential constraint ensures that references from a parent table to a related table will give valid data.

refresh. An action that updates the information at which you are currently looking. The contents of the window can be updated to reflect the current status of the information.

related object type. In DataHub/2, an object type that can be navigated to from a selected object. The relationships between objects determine which objects can be displayed with other objects. For example, tables are related to views and indexes, and therefore you can navigate to a view or an index from a table.

relational database (RDB). An object consisting of a relational database management system catalog and all the relational database objects it describes. A relational database's implementation depends on its SAA context:

   MVS      A DB2 subsystem instance.
   VM       An SQL/DS server machine.
   OS/400   An occurrence of the OS/400 operating system. The OS/400 host name and the RDB name can differ, but there is a one-to-one correspondence.
   OS/2     An occurrence of an OS/2 database. There can be more than one RDB for each copy of Database Manager.

relational database management system. A system that manages relational databases. In DataHub/2, relational database management systems are:

• DB2
• SQL/DS
• OS/400 database
• OS/2 Database Manager.

remote data access. The ability to use data maintained by a remote system.

remote data services. A Database Manager component that enables an application to access Database Manager and a database on a remote workstation. The application does not need to know the physical location of the database. Remote data services determines the database location and manages the transmission of the request to Database Manager and the reply back to the application.

remote system. An RDBMS other than the local RDBMS. Often used to describe the relationship between an application requester and an application server. Contrast with local system.

Reorganize function. In DataHub/2, Reorganize is a function that reorganizes DB2 tables and indexes, SQL/DS indexes, and OS/400 and OS/2 Database Manager tables to reclaim space and sort the data, providing more efficient access to data.

To use the Reorganize function, select an object from the client area and then select Reorganize from the Utilities pull-down menu, or use the DataHub/2 REORG command. This function is part of the DataHub/2 tools feature.

requester. The source of a request to a server.

requesting work. In DataHub/2, requesting work is a unit of work that has initiated a request to use the distributed database at another RDBMS (server). DataHub/2 displays requesting units of work under the heading "Requesting work."

The definition of a requesting unit of work is different for each host environment and query that is requested at a host or RDB.

• Display at RDB:

   MVS      A unit of work is classified as requesting when an application is running at the selected RDB and accessing relational data on a different RDB.

            With DB2 it is possible that a unit of work at a single RDB can be classified as both serving and requesting. A CONNECT statement issued from another RDB results in the unit of work being classified as a serving unit of work. However, if the SQL statement contains a three-part table name that refers to another DB2, the unit of work will be classified as requesting.

   OS/400   A unit of work is classified as requesting when an application is running at the selected RDB and accessing relational data on a different RDB.

   VM       Units of work at an SQL/DS RDB will never be classified as requesting. Units of work will always be classified as serving at an RDB. See also serving work for VM.

• Display at host:

   MVS      One or more RDBs can reside at an MVS host. DataHub/2 provides information for each RDB at the host. Therefore, a unit of work is classified as requesting when an application is running at the RDB being processed by DataHub/2 and accessing relational data on a different RDB.

            With DB2 it is possible that a unit of work at a single RDB can be classified as both serving and requesting. A CONNECT statement issued from another RDB results in the unit of work being classified as a serving unit of work. However, if the SQL statement contains a three-part table name that refers to another DB2, the unit of work will be classified as requesting.

   OS/400   One RDB can reside at an OS/400 host. A unit of work is classified as requesting when an application is running at the RDB being processed by DataHub/2 and accessing relational data on a different RDB.

   VM       A unit of work is classified as requesting when an application is running at the selected host and is accessing relational data at any RDB. Therefore, units of work at a VM host will always be classified as requesting.


return code. A value returned to a program to indicate the results of an operation requested by that program.

Run function. In DataHub/2, Run is a function that can be used to perform actions on objects by running files containing JCL or SQL statements.

To use the Run function, select Run from the Actions pull-down menu, or use the DataHub/2 RUN command. This function is part of the DataHub/2 platform feature.

S

SAA. See Systems Application Architecture.

scroll bar. A part of a window, associated with a scrollable area, that a user interacts with to see information that is not currently visible.

secondary user. In DataHub Support, a second or alternative user ID associated with the RDBMS work unit selected for the display details request. Not applicable to OS/400.

select. To explicitly identify one or more object types or occurrences to which a subsequent action will apply.

selection field. A set of related choices. See also entry field.

selection list. Choices that a user can scroll through to select one choice.

sequence number. A number that identifies the physical location of a record in a data set.

sequential data set. A data set whose records are organized on the basis of their successive physical positions, such as on magnetic tape. Several of the DB2 database utilities require sequential data sets.

server. The target of a request from a requester.

serving work. In DataHub/2, serving work is a unit of work that is running at a distributed database (server) as a result of a distributed request. DataHub/2 displays serving units of work under the heading "Serving work."

The definition of a serving unit of work is different for each host environment and query that is requested at a host or RDB.

• Display at RDB:

   MVS      A unit of work is classified as serving when data is being accessed at the selected RDB and the application is running at a different RDB.

            With DB2 it is possible that a unit of work at a single RDB can be classified as both serving and requesting. A CONNECT statement issued from another RDB results in the unit of work being classified as a serving unit of work. However, if the SQL statement contains a three-part table name that refers to another DB2, the unit of work will be classified as requesting.

   OS/400   A unit of work is classified as serving when data is being accessed at the selected RDB and the application is running at a different RDB.

   VM       A unit of work is classified as serving when data is being accessed at the selected RDB. Therefore, units of work at an SQL/DS RDB will always be classified as serving.

• Display at host:

   MVS      One or more RDBs can reside at an MVS host. DataHub/2 provides information for each RDB at the host. Therefore, a unit of work is classified as serving when data is being accessed at the RDB being processed by DataHub/2 and the application is running at a different RDB.

            With DB2 it is possible that a unit of work at a single RDB can be classified as both serving and requesting. A CONNECT statement issued from another RDB results in the unit of work being classified as a serving unit of work. However, if the SQL statement contains a three-part table name that refers to another DB2, the unit of work will be classified as requesting.

   OS/400   One RDB can reside at an OS/400 host. Therefore, a unit of work is classified as serving when data is being accessed at the RDB being processed by DataHub/2 and the application is running at a different RDB.

   VM       Units of work at a VM host are never classified as serving. Units of work are always classified as requesting at a host. See also requesting work for VM.

severity code. A code that indicates the seriousness of an error condition.

site. A specific RDBMS in a distributed relational database system. Synonymous with location.

site autonomy. The condition in which different database sites can be administered independently. Single nodes in the distributed system can still operate even when connections to other nodes have failed.

SNA. See Systems Network Architecture.

snapshot. A stored database object, similar to a table, that provides read-only data access. A snapshot is periodically refreshed from the base objects in its definition.

SQL. See Structured Query Language.

SQL/DS. See Structured Query Language/Data System.

SQL executed. In DataHub/2, the number of SQL statements processed by the RDBMS work unit selected for the display details request.

SQL statement. In DataHub/2, display SQL statement provides the statement in Structured Query Language (SQL) that is currently being executed for the local, requesting, and serving work units.

SQL/400. See IBM Structured Query Language/400.

state. In DataHub/2, indicates whether the RDBMS work unit selected for a display request is waiting for a lock (Wait), holding the lock (Held), or requesting the lock (Req).

status. In DataHub/2, a character string describing the current status (for example, Communication wait, Not active, or I/O wait) of the RDBMS work units listed as a result of a Display RDBMS work request.

stop on error. An error-handling option available for most functions in DataHub/2. When you specify this option, DataHub/2 will stop running an application when it encounters an error with a severity code greater than or equal to 8. Contrast with continue on error.

storage group. The named set of host disk space (DASD volumes) on which DB2 data can be stored. Storage groups are applicable to DB2 environments only.

storage pool. A specific set of available storage areas. The database administrator uses these areas to control storage of the database. A storage pool contains one or more dbspaces. Storage pools are applicable to SQL/DS environments only.

Structured Query Language (SQL). A language that can be used within host programming languages, or interactively, to define and manipulate data and to control access to resources in a relational environment.

Structured Query Language/400. IBM Structured Query Language/400. An IBM licensed program that provides a set of commands to access and manipulate data on an OS/400 system. Applications written using SQL/400 can be easily transported to other IBM systems.

Structured Query Language/Data System. An IBM licensed program that is the SAA version of SQL for the VM and VSE environments.

subsystem. A DB2 subsystem is defined to DataHub/2 when an MVS host or RDB is configured to DataHub/2. The Display Status function identifies the subsystem associated with the RDBMS work unit selected for the display details request. Subsystems are applicable to MVS and OS/400 environments only.

   MVS      Subsystem corresponds to the DB2 connection identifier.
   OS/400   Subsystem is the name of the subsystem in which the OS/400 job for the RDBMS work unit is running.

syntax diagram. A diagram that uses graphic notation to describe the structure of a command.

SYSID. See system-specific identifier.

SYSID (MVS). The system-specific identifier (SYSID) for an MVS system is a 39-character string made up of the agent control element (ACE) and the logical unit of work identifier (LUWID). It is of the form:

   ACE.NETNAME.LUNAME.LUW_instance

Where:

   ACE            The agent control element (ACE) address is the internal representation for an active user (thread). An ACE is unique for a given thread for the life of the thread. It is an 8-hexadecimal-character representation of the host ACE address.
   . (period)     Is a one-character delimiter.
   NETNAME        Is a network identifier, which is from 1 to 8 characters long.
   LUNAME         Is a logical unit within an SNA network, which is from 1 to 8 characters long.
   LUW_instance   Is a unique identifier for a unit of work, which is a 12-hexadecimal-character string.

An example of an MVS SYSID is:

   1234ABCD.SYDNEY.LUNAME12.FFEDC3456788

SYSID (OS/400). The system-specific identifier (SYSID) for an OS/400 system is a fully qualified OS/400 job name. It is of the form:

   jobnumber/userid/jobname

Where:

   jobnumber   The system-assigned job number. This is 6 characters long, from the set 0-9.
   /           Is a one-character separator between the parts of the job name.
   userid      Is a user identification that is from 1 to 10 characters long.
   jobname     Is a job name that is from 1 to 10 characters long. The first character must be alphabetic.

An example of an OS/400 SYSID is:

   533420/STEVEY/DISTDBDB14
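Given the fixed layout above, the OS/400 SYSID can be split with a single format string. This is a minimal C sketch of the rule as stated; it checks only the field shapes, not whether the job actually exists.

   #include <stdio.h>

   /* Split jobnumber/userid/jobname; each %[...] stops at a slash. */
   int parse_os400_sysid(const char *sysid, char *num, char *user, char *job)
   {
       return sscanf(sysid, "%6[0-9]/%10[^/]/%10s", num, user, job) == 3 ? 0 : -1;
   }

   int main(void)
   {
       char num[7], user[11], job[11];
       if (parse_os400_sysid("533420/STEVEY/DISTDBDB14", num, user, job) == 0)
           printf("jobnumber=%s userid=%s jobname=%s\n", num, user, job);
       return 0;
   }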

SYSID (VM). The system-specific identifier (SYSID) for a VM system is a special identifier made up of the VM user ID and a time stamp. It is of the form:

   userid.timestamp

Where:

   userid       Is a user identification, from 1 to 8 characters long. If delimited with double quotes, it can be up to 10 characters long.
   . (period)   Is a one-character delimiter.
   timestamp    Is the character representation of the host time-of-day (TOD) clock value, which is exactly 16 characters from the set 0-9 and A-F.

An example of a VM SYSID is:

   JANETF.AFEDAEEDEACDE000

Systems Application Architecture (SAA). A set of IBM software interfaces, conventions, and protocols that provide a framework for designing and developing applications that are consistent across systems.

Systems Network Architecture (SNA). The description of the logical structure, formats, protocols, and operational sequences for transmitting information units through the networks, as well as the operational sequences for controlling the configuration and operation of networks.

system-specific identifier. A string of alphanumeric characters used to identify a work item. The SYSID format differs by system; each format uniquely identifies a unit of work at an RDBMS. See also SYSID (MVS), SYSID (OS/2), SYSID (OS/400), and SYSID (VM).

T

table. A named data object consisting of a specific number of named columns and some number of unordered rows. A fully qualified table name is of the form rdb.collection.object.

table column. A table column is a vertical component of a table that consists of a name and a particular data type. All values in a particular column share this same data type.

table space. In DB2, a page set that is used to store the records of one or more DB2 tables. Table spaces are applicable to DB2 environments only.

three-part name. The fully qualified name of an SQL object such as a table or view. A three-part name usually consists of an RDB name, a collection ID, and an object name.

time in state. In DataHub/2, the amount of time that the RDBMS work unit selected for a display request has been in the current state. See also state.

The time is in the format specified for the OS/2 system on which DataHub/2 is running. The time format has three decimal places for milliseconds, separated from the seconds by the decimal separator specified for the OS/2 system.

For example, if the OS/2 system uses the JIS time format with a comma decimal separator, the time will appear as:

   1 day 01:13:14,138

If the time is less than a day, it will appear as:

   12:54:32,002

If the time is greater than 30 days, it will appear as:

   >30 days 12:54:32,002

tool. One or more application programs that perform a DataHub user function, such as copying database objects or querying the status of a logical unit of work.

tools conversation flows. Communications, carried out using services provided by the DataHub platforms, between the host and workstation components of a DataHub tool.

trace controls. Parameters that determine whether tracing will occur, which processing will be traced, and where tracing output is to be written.

trace record. Trace output generated by DataHub/2. Each trace record consists of header and detail sections.

transparency. The ability to access remote data as though it were local.

U

UNC. See universal naming convention.

unit of recovery. A logical unit of work that represents a recoverable sequence of operations within a single resource manager, such as an instance of DB2. The initiation and termination of a unit of recovery define the points of consistency within a job. See also unit of work and RDBMS work.

unit of work. See RDBMS work.

universal naming convention (UNC). A name used to identify the server and netname of a resource, taking the form \\servername\netname\path\filename or \\servername\netname\devicename. See also netname.

unlike. Refers to two or more different RDBMSs, for example, SQL/DS and DB2. Contrast with like.

unlike environments. A network of dissimilar RDBMSs that provide access to data residing at any of the locations containing a participating management system (for example, an RDBMS on an MVS system and an RDBMS on a VM system accessing data at each other's sites). Contrast with like environments.

Unload function. In DataHub/2, Unload is a function that takes data from an existing relational table, or view (OS/2), and stores it in a sequential file at the RDBMS where the table, or view (OS/2), exists.

To use the Unload function, select an object from the client area and then select Unload from the Utilities pull-down menu, or use the DataHub/2 UNLOAD command. This function is part of the DataHub/2 tools feature.

Update Statistics function. In DataHub/2, Update Statistics is a function that updates the RDBMS optimizer catalog statistics on DB2, SQL/DS, and OS/2 Database Manager systems. The optimizer uses these statistics to determine the best access paths to data.

To use the Update Statistics function, select an object from the client area and then select Update statistics from the Utilities pull-down menu, or use the DataHub/2 UPDATE STATS command. This function is part of the DataHub/2 tools feature.

user. In DataHub/2, the identification of the user who is running the RDBMS work listed as a result of a display RDBMS work request.

   MVS      The user is the primary authorization ID.
   OS/400   The user is the OS/400 user ID.
   VM       For a requester, the user is the VM user ID. For a server, the user is the SQL ID.

user authorizations. In DataHub/2, the set of all authorizations on objects for a user. User authorizations can be copied from one user to another. In DataHub/2, user authorizations are displayed in the client area as authorization occurrences for objects. Contrast with object authorizations.

user exit. A point in an IBM-supplied program where a user-written routine can be given control.

user group. The name of a grouping of related user IDs. User groups are applicable to OS/400 environments only.

user ID. A string of characters that identifies a user to a system (RDBMS or host). Most of the RDBMSs refer to this as the authid. See also authorization ID.

V

variable. A part of a DataHub/2 command parameter that you supply; it is shown in lowercase letters in the syntax diagram.

view. An alternative representation of data from one or more tables. A view can include all or some of the columns contained in the tables or views on which the view is defined.

view column. A vertical component of a view that consists of a name and a particular data type. All values in a particular column share this same data type.

virtual machine. A functional simulation of a computer and its associated devices. VM manages the computer's resources in such a way that all workstation users have their own virtual machine. All users can work at their virtual machines as though each is the only person using the real computer.

Virtual Telecommunications Access Method. An IBM licensed program that controls communication and the flow of data in a computer network. It provides single-domain, multiple-domain, and multiple-network capability. VTAM runs under MVS, OS/VS1, VM, and VSE.

VM. See virtual machine.

VTAM. See Virtual Telecommunications Access Method.

W

where clause. Used for subselecting rows from a table or view. In IBM DataHub, the where clause must conform to the syntax of the query language subselect where clause for the RDBMS being selected.


wide area network. A network of devices that are connected to one another for communication over a larger distance. See also local area network.

wildcard character. A character match string used to substitute for unknown or unspecified characters or words. In DataHub/2, the percent symbol (%) is used as the wildcard character, and a trailing % is allowed for the qualifier.
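Because only a trailing % is honored, qualifier matching reduces to a prefix comparison. The following minimal C sketch illustrates that rule; the object names in the example are invented for illustration.

   #include <stdio.h>
   #include <string.h>

   /* Match a DataHub/2 qualifier against a name: a trailing %
      makes it a prefix test, otherwise the match must be exact. */
   int qualifier_match(const char *qual, const char *name)
   {
       size_t n = strlen(qual);
       if (n > 0 && qual[n - 1] == '%')
           return strncmp(qual, name, n - 1) == 0;
       return strcmp(qual, name) == 0;
   }

   int main(void)
   {
       printf("%d %d\n", qualifier_match("EMP%", "EMPLOYEE"),  /* 1 */
              qualifier_match("EMP%", "DEPT"));                /* 0 */
       return 0;
   }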

window. An area of the screen with visible boundaries within which information is displayed. A window can be smaller than, or the same size as, the screen. Windows can appear to overlap on the screen. See also DataHub/2 window.

Worksheet File (WSF) file format. An OS/2 file format used to load and unload data in worksheet formats supported by LOTUS products (1-2-3 and Symphony). Database Manager supports a subset of the worksheet records that is the same for all the LOTUS products. Each WSF file represents one worksheet.



List of Abbreviations

APPC      Advanced program-to-program communications
APPN      Advanced Peer-to-Peer Networking
API       Application programming interface
AR        Application requester
AS/400    Application System/400
AS        Application server
BSDS      Bootstrap data set
CCSID     Coded character set identifier
CD        Copy Data
CDB       Communication database
CICS      Customer Information Control System
CL        Control language
CLI       Command-line interface
CM        Communications Manager
CM/2      Communications Manager/2
CPI       Common Programming Interface
CUA       Common User Access
DASD      Direct access storage device
DBA       Database administrator
DBCS      Double-byte character set
DBMS      Database management system
DB2       DATABASE 2
DB2/2     DATABASE 2/2
DBM       IBM Extended Services for OS/2 Database Manager
DCL       Data control language
DDCS/2    Distributed Database Connection Services/2
DDL       Data definition language
DDF       Distributed data facility
DH        DataHub
DH/2      DataHub/2
DHS       DataHub Support
DHS/2     DataHub Support/2
DHS/400   DataHub Support/400
DHS/MVS   DataHub Support/MVS
DHS/VM    DataHub Support/VM
DML       Data manipulation language
DRDA      Distributed Relational Database Architecture
DRDB      Distributed relational database
DS        Display status
DUW       Distributed unit of work
GUI       Graphical user interface
IBM       International Business Machines Corporation
ISQL      Interactive Structured Query Language
ITSC      International Technical Support Center
JCL       Job control language
JES       Job entry system
LAN       Local area network
LAPS      LAN adapter and protocol support
LU        Logical unit
LU6.2     Logical unit 6.2
MA        Manage authorizations
NAU       Network addressable unit
NCP       Network Control Program
NETBIOS   Network Basic Input/Output System
OEM       Original equipment manufacturer
OS/2      Operating System/2
OS/2 ES   IBM Extended Services for OS/2
OS/400    Operating System/400
PIU       Path information unit
PU        Physical unit
PWS       Programmable workstation
RACF      Resource Access Control Facility
RDB       Relational database
RDBMS     Relational database management system
RDS       Remote Data Services
RH        Request/response header
RU        Request unit
RUW       Remote unit of work
SAA       Systems Application Architecture
SAP       Service access point
SNA       Systems Network Architecture
SPUFI     SQL Processor Using File Input
SQL       Structured Query Language
SQL/400   Structured Query Language/400
SQL/DS    Structured Query Language/Data System
TC        Tools conversation
TH        Transmission header
UPM       User profile management
VM        Virtual Machine
VTAM      Virtual Telecommunications Access Method


Index

Numerics

802.2 130
802.2 resources 130
   link stations 131
   SAP 130
   users 130

A

ACF/NCP
   See network control program
activating DataHub Support/VM gateways 187
adding a new DRDA host 102
administrator's folder 148
alert file 181
alias 107
APPC/VM 53
APPC/VTAM 53
application server 109
architecture 3
   platform feature 4
   tools feature 4
AS/400 network definition 191
AS/400 RDBMS considerations 192
AVS 53
AVS console 53, 177
AVS machine 56
AVS profile 56
AVS profile definition 183

B

bind 32
binding DataHub Support/400 to other hosts 192

C

CDRM
   See cross-domain resource manager
CDRSC
   See cross-domain resource
CMR 177
codepage support 148
communication directory 58
communication directory definitions 184
communications link 162
component relationship 70, 155, 193
control point name 34, 45, 66, 77
control program 53
Copy Data 12
CP
   See control program
cross-domain resource 32, 41
cross-domain resource manager 32, 40
customer business requirements 19

D<br />

data flows 6<br />

DRDA flows 6<br />

RDS flows 7<br />

TC flows 7<br />

database name 54<br />

DataHub<br />

DataHub tools conversation connectivity 151<br />

DataHub/2 workstation 105<br />

DRDA connectivity 31<br />

introduction 1<br />

planning 19<br />

problem determination 203<br />

security 241<br />

DataHub key connectivity parameters<br />

DB2 subsystem name 154<br />

host RDB name 153<br />

mode name 153<br />

OS/2 <strong>Database</strong> Manager database name 153<br />

partner logical unit name 153<br />

partner transaction program name 153<br />

symbolic destination name 152<br />

DataHub <strong>Support</strong>/2<br />

configuration file 195<br />

configuring OS/2 for 193<br />

customization 194<br />

OS/2 Communications Manager definitions 196<br />

OS/2 <strong>Database</strong> Manager definitions 201<br />

OS/2 host components 193<br />

problem determination 239<br />

RDB name map file 113, 195<br />

security 255<br />

DataHub <strong>Support</strong>/400<br />

configuring AS/400 for 191<br />

AS/400 network definition 191<br />

configuration considerations 192<br />

prestart job 192<br />

problem determination 232<br />

security 254<br />

DataHub <strong>Support</strong>/MVS<br />

configuring MVS for 167<br />

connectivity requirements 172<br />

environment configuration 168<br />

installation preparation 167<br />

installing the platform and tools feature 168<br />

starting DataHub <strong>Support</strong>/MVS 171<br />

problem determination 210<br />

security 248<br />

DataHub <strong>Support</strong>/VM<br />

configuring VM for 175<br />

environment configuration 178<br />

operating system 177<br />

    performance considerations 190
    service pool support 175
    system operation 187
  problem determination 226
  security 251
DataHub tools conversation connectivity 151, 173
  configuring AS/400 for DataHub Support/400 191
  configuring MVS for DataHub Support/MVS 167
  configuring OS/2 for DataHub Support/2 193
  configuring the DataHub/2 workstation 155
  configuring VM for DataHub Support/VM 175
  ITSC network example 154
  key connectivity parameters 152
  recommendations 202
DataHub/2 requester 120
DataHub/2 server 119, 131
DataHub/2 workstation 105, 128, 155
  codepage support 148
  configuration 106, 155
    component relationship 155
    DataHub/2 database definitions 156
    OS/2 Communications Manager definitions 161
    OS/2 Database Manager definitions 166
  connect request 108
  database definitions 156
  database placement 119
  DRDA connectivity 69
  installation 111, 115, 119, 121
  LAN design considerations 144
  performance 145
  prerequisite products 105
  problem determination 204
  recommendations 148
  scenario 1: local management of local data 110
  scenario 2: central management of distributed data 114
  scenario 3: distributed management of distributed data 117
  security 241
  tools conversation connectivity 127, 143, 144, 155
DB2 subsystem name 154
DDCS/2
  See Distributed Database Connection Services/2
Display Status 11
Distributed Database Connection Services/2 70, 108
domain 32
DRDA components in SQL/DS 52
  APPC/VM 53
  APPC/VTAM 53
  AVS 53
  AVS console 53
  AVS machine 56
  AVS profile 56
  communication directory 58
  control program 53
  database name 54
  group control system 53
  machine-id 55
  resource-id 54
  SQL/DS application requester 53
  SQL/DS database machine 54, 55
DRDA connectivity 31, 172
  adding a new DRDA host 102
  AS/400 definitions 60
  ITSC network example 38
  key connectivity parameters 33
  MVS definitions 40
  OS/2 definitions 69
  recommendations 104
  SNA environment 31
  VM definitions 51
DRDA flows 6
DRDA key connectivity parameters
  cross-reference 37
  DRDA parameters 35
  SNA parameters 33
DRDA parameters
  partner RDB name 36, 48, 56, 67
  RDB name 36, 47, 56, 67, 87
  transaction program name 36, 58, 67
  transaction program prefix 36

F
files
  placement 119
  relationship with EXECs 180
functions
  Copy Data 12
  Display Status 11
  Manage Authorizations 10
  Utilities 8

G
gateway LU 177
GCS
  See group control system
group control system 53

H
host RDB name 153

I
IBMLAN.INI 141
IDBLK 34, 63, 77
IDNUM 34, 63, 77
implementation checklist 26
Information Warehouse framework 2
introduction to DataHub 1
  architecture 3
  data flows 6, 17
ITSC network example 38, 154

K
key connectivity parameters
  DataHub 152
  DRDA 33

L
LAN adapter and protocol support 69, 109, 127
LAN design considerations 144
  LAN server compatibility 144
  placement of LAN server 145
LAN requester 128
LAN requester and server reserved resources 132
LAN resource configuration 130
  802.2 resources 130
  NETBIOS resources 131
LAN server access profile 120
LAN server administration 119
LAPS
  See LAN adapter and protocol support
LAPS configuration samples 132
link stations 131
local node characteristics 162
location name 47
logical unit 31
logical unit name 34, 45, 51, 64, 77
LOGMODE
  See logon mode
logon mode 32
logon mode table 33
LU
  See logical unit

M
machine-id 55
Manage Authorizations 10
managing directories 98
map file 113, 195
MAXDATA/MAXBFRU 34, 42
mode name 35, 46, 66, 85, 153, 166
MODETAB
  See logon mode table

N
names 131
NAU
  See network addressable unit
NCB
  See network control blocks
NETBIOS 130
NETBIOS resources 131
  names 131
  network control blocks 131
  sessions 131
network addressable unit 32
network control blocks 131
network control program 31
network name 34, 45, 66, 77
nodes 32

O
OS/2 Communications Manager 69, 109
  definitions 161, 196
OS/2 configuration file 195
OS/2 Database Manager 70, 108, 120
  database alias name 153
  definitions 166, 201
  reserved resources 131
OS/2 host components 69, 193
  Distributed Database Connection Services/2 70
  LAN adapter and protocol support 69
  OS/2 Communications Manager 69
  OS/2 Database Manager 70
  relationship between OS/2 components 70

P
pacing 35, 47, 66, 85
partner logical unit name 35, 46, 65, 81, 153, 162
partner RDB name 36, 48, 56, 67
partner token-ring address 34, 44, 64, 81
partner transaction program name 153
password changes 256
path information unit 33
performance 145, 190
  placement 145
  workload 146
physical unit 31
PIU
  See path information unit
placement 145
planning for DataHub 19
  customer business requirements 19
  factors affecting implementation 26
  implementation checklist 26
  recommendations 30
  sample scenarios 20
platform feature 4
platform feature function
  Run 7
problem determination 203
  AS/400 host 232
  DataHub/2 workstation 204
  implementation strategy 203
  MVS host 210
  OS/2 host 239
  VM host 226
PROTOCOL.INI 127, 139
PU
  See physical unit

Q
QEMQUSER collection 192

R
RDB name 36, 47, 56, 67, 87
RDS 129
RDS flows 7
RDS parameters 36
  workstation name 36, 87
recommendations 104, 202
request unit 33
resource-id 54
RU
  See request unit
RU size 35, 47, 66, 85
Run 7

S
sample scenarios 20
  scenario 1: local management of local data 20
  scenario 2: central management of distributed data 22
  scenario 3: distributed management of distributed data 24
SAP
  See service access point
scenario 1: local management of local data 110
  installation procedure and configuration 111
  process flow 111
  required products 110
scenario 2: central management of distributed data 114
  installation procedure and configuration 115
  process flow 115
  required products 114
scenario 3: distributed management of distributed data 117
  DataHub/2 database connectivity parameters 144
  DataHub/2 execution: data requirements 118
  DataHub/2 workstation installation as a requester 121
  installation environment preparation 119
  LAN connectivity parameters 127
  OS/2 Database Manager connectivity parameters 143
  required products 118
security 241
  AS/400 254
  DataHub/2 workstation 241
  MVS 248
  OS/2 managed host 255
  password changes 256
  VM 251
service access point 128, 130
service pool 175, 176
  configuration file 185
  machine 186
session 32, 131
SNA environment 31
  bind 32
  cross-domain resource 32
  cross-domain resource manager 32
  domain 32
  logical unit 31
  logon mode 32
  logon mode table 33
  network addressable unit 32
  network control program 31
  nodes 32
  path information unit 33
  physical unit 31
  request unit 33
  session 32
  subarea 32
  system services control point 31
SNA parameters
  control point name 34, 45, 66, 77
  cross-domain resource 41
  cross-domain resource manager 40
  IDBLK 34, 63, 77
  IDNUM 34, 63, 77
  logical unit name 34, 45, 51, 64, 77
  MAXDATA/MAXBFRU 34, 42
  mode name 35, 46, 66, 85
  network name 34, 45, 66, 77
  pacing 35, 47, 66, 85
  partner logical unit name 35, 46, 65, 81
  partner token-ring address 34, 44, 64, 81
  RU size 35, 47, 66, 85
  token-ring address 33, 41, 64, 75
SQL/DS application requester 53
SQL/DS database machine 54, 55
SSCP
  See system services control point
subarea 32
symbolic destination name 152, 163
system configuration file 185
system database directory 108
system services control point 31
SystemView strategy 1

T
task handler 178
TC flows
  See tools conversation flows
testing
  DRDA and RDS connections for OS/2 98
  DRDA connections for AS/400 68
  DRDA connections for MVS 51
  DRDA connections for VM 59
token-ring address 33, 41, 64, 75
tool table example 186
tools conversation flows 7
tools conversation key connectivity parameters 152
tools feature 4
TPN
  See transaction program name
transaction program name 36, 58, 67, 196
transaction program prefix 36

U
UPM
  See user profile management
user profile management 107
users 130
Utilities 8
  Backup 9
  Load 8
  Recover 9
  Reorg 9
  Unload 8
  Update Statistics 10

W
workload 146
workstation directory 108
workstation name 36, 87


ITSO Redbook Evaluation

International Technical Support Organization
Database Systems Management:
IBM SystemView Information Warehouse
DataHub Implementation and Connectivity
June 1993

Publication No. GG24-4031-00

Your feedback is very important to help us maintain the quality of ITSO redbooks. Please fill out this questionnaire and return it using one of the following methods:
• Mail it to the address on the back (postage paid in U.S. only)
• Give it to an IBM marketing representative for mailing
• Fax it to: Your International Access Code + 1 914 432 8246
• Send a note to REDBOOK@VNET.IBM.COM

Please rate on a scale of 1 to 5 the subjects below.
(1 = very good, 2 = good, 3 = average, 4 = poor, 5 = very poor)

Overall Satisfaction ____

Organization of the book ____
Accuracy of the information ____
Relevance of the information ____
Completeness of the information ____
Value of illustrations ____
Grammar/punctuation/spelling ____
Ease of reading and understanding ____
Ease of finding information ____
Level of technical detail ____
Print quality ____

Please answer the following questions:
a) Are you an employee of IBM or its subsidiaries: Yes____ No____
b) Do you work in the USA? Yes____ No____
c) Was this redbook published in time for your needs? Yes____ No____
d) Did this redbook meet your needs? Yes____ No____
   If no, please explain:

What other topics would you like to see in this redbook?

What other redbooks would you like to see published?

Comments/Suggestions: (THANK YOU FOR YOUR FEEDBACK!)

Name
Address
Company or Organization
Phone No.


ITSO Redbook Evaluation
GG24-4031-00 IBM ®

BUSINESS REPLY MAIL
FIRST-CLASS MAIL PERMIT NO. 40 ARMONK, NEW YORK
POSTAGE WILL BE PAID BY ADDRESSEE

IBM International Technical Support Organization
Department 471/E2
650 Harry Road
San Jose, CA
USA 95120-6099

NO POSTAGE NECESSARY IF MAILED IN THE UNITED STATES



IBM ®
Printed in U.S.A.
GG24-4031-00
