H2R User's Guide - Htwc.it

H2R User's Guide


Table Of Contents

Scope of the product
    Supported platforms
General concepts
    Aim of the product
    Unchanged user interface
    Performance
    Utilities
    Integration
    Automatic migration
    New structures automatic definition and creation
    Multiplatform usage
    Forced modularity
Preliminary elaboration area
    Internal tables
    Copies analyzing
    DBD/PSB analyzing
    IO programs generation
    User tables generation
    Conversion rules and load programs generation
    Data Unload Utility and data elaboration
    User programs elaboration
    IMS format compiling
Runtime execution area
    Kernel
    IO programs
H2R flow design
IMS flow design
Prerequisites
Installation
Environment creation
    Prerequisites
    UNIX User creation
    Environment definition
    Internal h2r tables creation
Internal h2r tables loading
    DBD and PSB analyzing
    Copies analyzing
Components generations
    IO programs generation
    Users tables scripts generation
    Conversion program generation
    Conversion rules definition
    Load programs generation
    User program elaboration
    IMS formats compiling
Data management
    Data unloading
    Data conversion
    User tables creation and data loading
User guide
    Flow
    DL/1 IMS/DB configuration
    DL1 BATCH runtime switches
H2R utilities
    UNIX scripts
    SQL scripts
    Ant utilities
    Miscellanea
Internal H2r tables description
    DBD_SEGM
    DBD_FIELD
    DBD_XDFLD
    DBD_LIST
    DBD_COPY
    PSB_PCB
    PSB_SENSEG
    COPY_FIELDS
Logical DBD elaboration: example
    Implosion
    Explosion
How to deal with a new PSB?
How to deal with a new physical DBD?
How to deal with a new search field, a new segment, a new secondary index?


Introduction

Scope of the product

The H2R product permits:

- data migration from a hierarchical structure to a relational one, which may permit the substitution of the mainframe with a UNIX system
- development of new applications with the standard mainframe tools (such as CICS, IMS, etc.)
- development of applications on low-cost systems in order to port them into production on the mainframe

Supported platforms

HP-UX (PA-RISC, Itanium2)
- HP-UX 11.0
- HP-UX 11i
- HP-UX 11i v2 (11.23)

AIX (PowerPC)
- AIX 4
- AIX 5.x

Solaris (Sparc)
- Solaris 7
- Solaris 8
- Solaris 9

Linux (Intel x86)
- SuSE
- RedHat


Product overview

General concepts

Aim of the product

The aim of the H2R product is to provide the users of IBM's DL1/IMS DB with an interface to a relational database, keeping the data in the new organization while still accessing them in the traditional manner.

The most important points are:

- The same preexistent user interfaces to access data in transactional or batch mode.
- Performance at least equal to, if not better than, the previous situation.
- Surrounding utilities.
- Possibility to integrate the new database with new applications.
- Automatic data migration and conversion.
- Automatic definition and creation of the new structures.
- Multiplatform usage.
- High modularity that allows easy personalization in the case of particular requirements and/or incongruences.

Unchanged user interface

In order to obtain this functionality, two specific modules with two specific functions are provided:

The first module, with the standard Entry Point names (CBLTDLI, PLITDLI), is used in the case of the CALL level interface. It analyses the parameters passed by the calling program and normalizes them for the H2R KERNEL.

The second module, for the High Level Program Interface, executes the static transformation of the EXEC DLI commands into CALL type syntax.

Moreover, from the general point of view, the same RETURN-CODES are maintained and are passed to the user programs in the same manner and in the same location; the entire concept of DLI scheduling is also reproduced.

Batch program execution is performed as in the original environment, with the same execution utility name and the same number of parameters with the same values.

Performance

In spite of the fundamental differences between the IMS and H2R data structures, and although the relational nature of the H2R structure allows quicker and easier searches that were previously very hard or impossible, there are some particular types of database scrolling that may turn out to be more expensive in the new format.



The IMS pointer structure makes it possible to find out, without a specific database access, whether a child segment exists or not, while the splitting of the data into the different tables composing the whole DBD doesn't give the same possibility.

The creation of specific indexes, and the possibility to fulfill commands with a direct read instead of a sequential one, allow this problem to be worked around efficiently.

The transformation of the data from a hierarchical structure to a relational one simplifies the resolution of the logical structures, which can be scrolled directly using the specific indexes.

The possibility to create CONSTRAINT restrictions between the connected tables guarantees the integrity of the data.

Utilities

All recovery, backup, reorganization, and optimization routines are direct properties of the target relational DB.

Specific utilities are provided in order to allow simplified access to the data following the IMS logic, and in order to measure the number and the type of accesses to the DB.

Integration

The data resident in the DBS are immediately accessible to both old and new programs in relational mode.

Both the old IMS mode and standard SQL data access can be used in the same programs, so that both the integration of the old data with the new information bases and more efficient access to the old information can be obtained.

When accessing the DBS image of the old IMS in insert/update mode directly via SQL, all the H2R rules should be respected; for this reason this type of data approach is considered inadvisable.

Automatic migration

H2R is supplied with an analyzer of the old structures (PSB and DBD), which allows the collection of all necessary information about the old database.

By means of the standard IMS data export program, and based on the information captured by the analyzer (see above), H2R automatically generates the import programs for the relational databases, taking into consideration all the differences that must be managed, such as field typology, eventual redefinitions, dirty fields, etc.

Regarding the EBCDIC-ASCII conversion, such programs allow some data to be kept in the original EBCDIC format in order to maintain the same elaborative sequences; these data are then automatically converted at runtime at the moment of every access.
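The character-level part of the EBCDIC-ASCII step can be pictured with the standard `dd` conversion tables. This is only an illustrative sketch with invented file names: the conversion programs generated by H2R also deal with redefinitions, packed fields and dirty fields, which a plain character-table translation cannot handle.

```shell
# Build a fake EBCDIC sample file, then translate it back to ASCII
# with dd's built-in conversion tables.
printf 'HELLO' | dd conv=ebcdic 2>/dev/null > /tmp/sample.ebc
dd if=/tmp/sample.ebc conv=ascii 2>/dev/null    # prints HELLO
```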

New structures automatic definition and creation

Always based on the information collected by the analyzer, H2R generates everything that is necessary for the creation of the new tables, indexes, constraints, etc.

The information reported by the analyzer is stored in a number of relational tables that can be easily accessed both by the users for consultation and by H2R itself during the elaboration phase.

Multiplatform usage

In order to enable the usage of H2R in various environments, it was realized principally in three languages: C, Perl and COBOL.

The parts of the product that operate just once, during the migration and installation phase, are written mostly in C and Perl and can operate on both UNIX and LINUX environments.

All runtime operative parts, such as the generated programs, are realized in COBOL. Regarding the level, COBOL Level II, actually present on all PC and Mainframe platforms, is used. The data access utilities for the users are also realized in COBOL.

Forced modularity

In order to be able to modulate the H2R functions and to perform punctual interventions on particular DB IMS data accesses, the approach of generating many Cobol programs for the different Physical and Logical DBD and Index accesses was chosen.

In this manner, modifications aimed at increasing performance can also be applied, taking advantage of additional information furnished by the customer but not present explicitly in the data definition.

Furthermore, in this way eventual problems with one of the databases don't interfere with the others, and can be tested and diagnosed separately.


Product logical structure

Preliminary elaboration area

The H2R activity can be logically divided into two main phases: the preliminary elaboration phase and the runtime execution phase. From this point of view, two corresponding main areas of the product can be distinguished. On the flow diagram below, the runtime execution phase is indicated with the thin double arrows (right upper corner of the design).

Internal tables

The H2R functionality is based on the use of eight internal RDBS tables that contain all necessary information about the original DL1 structure. The tables have standard definitions and are automatically created at the beginning of the migration project. The tables are loaded step by step on the basis of the information obtained during the analysis of the original Copies, DBD and PSB (described below). The information stored in the tables is then used extensively in both the preliminary and execution phases.

(See the appendix "Internal H2r tables description" for details.)

Copies analyzing

Takes the information from the Cobol copybooks corresponding to the DL1 segments and stores it in the COPY_FIELDS internal RDBS table. This table represents just one side of the global DL1 view and will be taken into consideration, together with other kinds of information described further on, during the creation of the final DBD_COPY internal table.

DBD/PSB analyzing

Analyzes the DBD sources furnished by the customer and loads the DBD-related internal tables. The obtained information is compared with the COPY_FIELDS contents (see above) and the result is written into the final DBD_COPY internal table.

The same utility also analyzes the PSB sources and stores the result in the PSB-related tables, used only at runtime.

IO programs generation

Creates the Cobol IO programs using the information stored in the DBD-related internal tables. To each DBD (logical, physical and index) corresponds exactly one IO program. The programs can be adjusted manually in order to provide better performance for some particular cases of IO operation; in this case the customer should furnish further information about the internal application logic.


User tables generation

Creates the user RDBS tables composing the new relational DB that will substitute the old IMS base. One table corresponds to one segment. The table structure, the primary and secondary keys, the eventual constraints and the relations between the different tables are designed automatically, based on the information stored in the internal DBD-related tables.

Conversion rules and load programs generation

The layout structures describing the records to be converted from EBCDIC to ASCII are created automatically; the conversion rules are defined based on the segment name. In cases where more than one layout structure can correspond to one segment, the customer is asked to provide additional recognition rules.

The programs to load the data are created based on the information in the internal DBD-related tables.

Data Unload Utility and data elaboration

Data are unloaded by means of the standard Cobol program DBLOADY, converted using the conversion programs and rules, and then loaded by the load programs (see above).

User programs elaboration

The user programs are slightly modified in order to bring the Mainframe Cobol syntax to the Microfocus one. The tp programs are also precompiled, and the EXEC DLI formalism is expanded into a CALL level one.

IMS format compiling

The IMS is managed with the CICS system, so each XIMS terminal should have the corresponding XCICS terminal logical name (see the DL1 Configuration paragraph below).

The main DL1 entry-point CBLTDLI recognizes the type of the request by analyzing the type of the corresponding PCB, and invokes the IMSKERN or DL1KERN program for an IMS or DL1 request respectively (see IMS flow design below).

The IMS formats are precompiled, and for each format one mapset file and several message files are created.


Runtime execution area

Kernel

The H2R Kernel is the central part of the execution phase: a kind of dispatcher that passes the call to the IO program corresponding to the requested DBD. It decides where the current DL1 operation requested by the user program is to be directed and how it is to be performed. In most cases the operation can be executed directly by the corresponding IO program.

IO programs

IO programs are the application-dependent modules, generated automatically in the preliminary phase; to each DBD corresponds one IO program. The structure of each IO program reflects the corresponding DBD structure and contains a number of sections describing all the eventual DLI operations to execute.

Usually the operation can be performed by means of a statically precompiled EXEC SQL statement, predisposed in the IO program. The more complex cases are recognized by the Kernel, which composes the relative SQL condition and passes it as a parameter to the IO program. In this case a dynamic SQL statement will be executed by the IO program.

The IO programs corresponding to the logical DBDs also resolve the logical relations between the segments.

H2R flow design

(flow diagram not reproduced)

IMS flow design

(flow diagram not reproduced)


H2R installation

Prerequisites

For a correct use of H2R, the following software products should be installed on your computer:

1. XFRAME
2. A correct XFRAME+H2R license obtained from H.T.W.C. S.r.l.
3. RDBMS
4. Microfocus ServerExpress 2.x
5. Perl, with the latest version of the DBI and DBD-RDBMS drivers installed (you can obtain them from the CPAN site)

Installation

- Login as the xframe unix user.
- Make sure that the variables XFRAMEHOME and ORACLE_HOME (in the case of Oracle usage) are set and point to the right directories.
- Put the h2r.tar.gz file in the home directory of XFRAME (where XFRAMEHOME points).
- Issue the following unix commands:

    cd $XFRAMEHOME
    gzip -dc h2r.tar.gz | tar xvf -
    cd h2r
    ksh install.ksh

  This creates the h2r directory and installs all the H2R components there.
- Create the Oracle user for internal use of H2R.
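Before running install.ksh, a quick sanity check of the two variables mentioned above can save a failed unpack. The check below is a hypothetical sketch; the two placeholder exports exist only to make the example self-contained.

```shell
# Placeholder values for this sketch only; on a real system these
# are already set by the xframe user's profile.
export XFRAMEHOME=/tmp/xframe_demo
export ORACLE_HOME=/tmp/oracle_demo
mkdir -p "$XFRAMEHOME" "$ORACLE_HOME"

# Verify each variable is set and points to an existing directory.
for v in XFRAMEHOME ORACLE_HOME; do
    eval dir=\"\$$v\"
    if [ -z "$dir" ] || [ ! -d "$dir" ]; then
        echo "$v is not set or does not point to a directory" >&2
        exit 1
    fi
done
echo "environment looks sane"
```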

Using H2R preliminary phase

Environment creation

As mentioned above, the H2R activity includes two main phases: the preliminary elaboration phase and the runtime execution phase. The first one is to be fulfilled just once, during the migration from the host machine to UNIX. The second is instead involved in each DL1 operation. This chapter describes the phases step by step.

Prerequisites

For a correct installation, the following software products should be installed on your computer:

- XFRAME and its standard software components
- H2R
- A correct XFRAME + H2R license obtained from H.T.W.C. S.r.l.
- RDBS
- Microfocus ServerExpress 2.x
- Perl, with the latest version of the Perl-RDBS connection drivers (DBD and DBI in the case of Oracle RDBS) installed (you can obtain them from the CPAN site)

UNIX User creation

- Create a UNIX user, e.g. 'xprod', with the same group as the XFRAME user.
- Create an RDBS user for the data tables.
- Copy the file 'xframelocal.conf' from the home directory of XFRAME to the home directory of xprod and change its permissions to allow writing.
- Edit .profile and comment out the instruction

    set -u

  if you find it.
- Add the following lines to your .profile:

    export JAVA_HOME=            # these three lines only if you have
                                 # xframe compiled with java extensions
    export SHLIB_PATH=$JAVA_HOME/jre/lib/PA_RISC/hotspot:$SHLIB_PATH
    export SHLIB_PATH=$JAVA_HOME/jre/lib/PA_RISC:$SHLIB_PATH
    export XFRAMEHOME=
    export XKEYHOME=$XFRAMEHOME/xkey
    export XFRAME_ARCH=32
    . $XFRAMEHOME/xframeenv.conf
    export SHLIB_PATH=$XFRAMEHOME/lib:$HOME/lib:$SHLIB_PATH
    . ./xframelocal.conf
    echo "XFRAMEHOME=$XFRAMEHOME"
    export ORACLE_HOME=
    export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
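The reason for commenting out `set -u` can be demonstrated: the profile lines above expand JAVA_HOME even on machines where xframe was built without the java extensions, and under nounset any reference to an unset variable aborts the whole login script. A small demo, assuming JAVA_HOME is genuinely unset:

```shell
# Simulate a login shell that sources the profile with 'set -u' active
# while JAVA_HOME is unset: the expansion fails and the script aborts.
if sh -c 'unset JAVA_HOME; set -u; echo "lib: $JAVA_HOME/jre/lib"' 2>/dev/null; then
    echo "profile survived"
else
    echo "profile aborted under set -u"
fi
```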


Environment definition

- Create the RDBMS user for internal H2R use.
- Login as the xprod UNIX user.
- Edit the xframelocal.conf file, adding the following settings:

    # H2R Settings
    export H2RHOME=$XFRAMEHOME/h2r
    export H2RPERL=$H2RHOME/perl
    export H2RSCRIPTS=$H2RHOME/sql
    export SHLIB_PATH=$SHLIB_PATH:$H2RHOME/lib
    export H2RSQLID=                     # connection with RDBS for internal h2r use
    export COBPATH=$COBPATH:$H2RHOME/release
    export PATH=$PATH:$H2RHOME/bin:$H2RHOME/etc
    export DL1PSBLIST=$HOME/etc/psblist  # the name of the PSB list for XCICS startup preloading
    export DL1SHMEMID=$HOME/DL1_SHMEMID  # the name of the temporary XCICS shared memory handler

N.B.: The value of the H2RHOME environment variable depends on the type of H2R product installation. Normally there are two types: in the first, H2R is installed as a subdirectory of the XFRAME product; in the second, H2R is installed as a separate UNIX user. In any case, the H2RHOME variable should point to the appropriate directory of the product installation.

- Login again as the xprod UNIX user to make the settings effective.
- Issue the UNIX command:

    h2r_create_environment

  to create (if not existing) the following directories:

    src                      # original programs (.pre)
    src/tp                   # online programs
    src/bt                   # batch programs
    bms                      # original mapsets (.pre)
    sdf                      # compiled mapsets
    cpy                      # copies
    etc                      # configuration files
    objs                     # compiled
    objs/gnt                 # compiled for production
    objs/int                 # compiled for animation
    tmp                      # temporary files, temporary storage
    data                     # VSAM data
    logs                     # XCICS logs
    bin                      # product utilities


    h2r/src                  # original sources
    h2r/src/dbd              # original DBD sources (.pre)
    h2r/src/psb              # original PSB sources (.pre)
    h2r/tables               # sql scripts for internal h2r use
    h2r/fields               # copy fields description
    h2r/fields/cpy           # original copies
    h2r/dbd                  # compiled DBD (.sql)
    h2r/psb                  # compiled PSB (.sql)
    h2r/cpy                  # copy for IMS/DB segments (one per segment: _.cpy)
    h2r/load                 # general load data utilities
    h2r/load/logs            # load logs
    h2r/load/sql             # sql scripts for data tables generation
    h2r/load/prog            # data loading programs
    h2r/import               # general data conversion directory
    h2r/import/ebcdic        # original data files (one per DBD)
    h2r/import/ascii         # converted data files (one per DBD: .ASC)
    h2r/import/diction       # copy for data conversion (one per DBD)
    h2r/import/diction/errc  # conversion logs
    h2r/prog_io              # H2R I-O management programs
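For reference, the directory layout above amounts to the loop below. This is a sketch reconstructed from the documented list; the real h2r_create_environment utility may create further entries or set specific permissions.

```shell
# Recreate the documented H2R working-directory skeleton under $HOME.
for d in src src/tp src/bt bms sdf cpy etc objs objs/gnt objs/int \
         tmp data logs bin \
         h2r/src h2r/src/dbd h2r/src/psb h2r/tables \
         h2r/fields h2r/fields/cpy h2r/dbd h2r/psb h2r/cpy \
         h2r/load h2r/load/logs h2r/load/sql h2r/load/prog \
         h2r/import h2r/import/ebcdic h2r/import/ascii \
         h2r/import/diction h2r/import/diction/errc h2r/prog_io; do
    mkdir -p "$HOME/$d"
done
```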

Internal h2r tables creation

- Login as the xprod UNIX user.
- Issue the UNIX command:

    sqlplus $H2RSQLID @$H2RSCRIPTS/tables.sql

  to create the H2R internal tables:

    DBD_LIST     # DBD list
    DBD_SEGM     # segments
    DBD_FIELD    # fields
    DBD_XDFLD    # secondary indexes
    DBD_COPY     # combined DBD and copy fields description
    PSB_PCB      # PCB of PSB
    PSB_SENSEG   # senseg of PSB
    COPY_FIELDS  # fields from copy description


Using <strong>H2R</strong> preliminary phase<br />

Internal h2r tables loading<br />

DBD and PSB analyzing<br />

Load all the original DBD of IMS/DB into the $HOME/h2r/src/dbd directory, renaming<br />
them as "dbd_name.dbd". Each physical DBD should be loaded together with all<br />
corresponding secondary indexes.<br />

Load all the original PSB of IMS/DB into the $HOME/h2r/src/psb directory, renaming<br />
them as "psb_name.psb".<br />

In order to load the data into the internal H2R tables, the following sequence of ant<br />
commands should be executed from the $HOME/h2r directory. Every ant command<br />
produces and then executes the SQL scripts that load the data into the various tables, as<br />
follows:<br />

ant dbd<br />

For each DBD source dbd_name in the $HOME/h2r/src/dbd directory, this command<br />
produces the corresponding SQL script dbd_name.sql in the $HOME/h2r/dbd directory<br />
and executes it, loading the DBD_SEGM, DBD_FIELD, DBD_XDFLD, DBD_LIST tables.<br />

ant psb<br />

For each PSB source psb_name in the $HOME/h2r/src/psb directory, this command<br />
produces the corresponding SQL script psb_name.sql in the $HOME/h2r/psb directory<br />
and executes it, loading the PSB_PCB, PSB_SENSEG tables.<br />

ant dbd_list<br />

produces the "dbd_list.lst" flat file under the $HOME/h2r/dbd directory. It is a reference<br />
guide file describing all DBDs.<br />

Loading the DBD_COPY and COPY_FIELDS tables requires a special procedure,<br />
described below.<br />

Copies analyzing<br />

In this step, the DBD segment copies are analyzed both for the user's RDBS tables<br />
definition and for the future data conversion.<br />

For each physical DBD, a COPY dbd_name.cpy should be composed in the<br />
$HOME/h2r/import/diction directory from the copies of its segments, in the following form:<br />

01 segment1_name.<br />
02 field1.<br />
02 field2.<br />
01 segment2_name.<br />
02 field1.<br />
02 field2.<br />

where the 01 levels describe the COPY of each segment.<br />

The list of the segments corresponding to every DBD can be found with the sqlplus<br />
query:<br />

select dbd_name, segm_name from dbd_segm where dbd_name in ('dbd_name1',<br />
'dbd_name2', … , 'dbd_nameN')<br />

The list of physical DBD names can be obtained by taking the records with the initial P<br />
flag from the $HOME/h2r/dbd/dbd_list.lst file.<br />

The easiest way to create the DBD COPY is to execute a UNIX "cat" command, pasting<br />
together the copies of the segments:<br />

cat "segment1_copy" "segment2_copy" "segmentN_copy" ><br />
$HOME/h2r/import/diction/dbd_name.cpy<br />

For example:<br />

cat PGSATR.cpy PGSCSD.cpy PGSCUM.cpy > h2r/import/diction/TESTDL1.cpy<br />

TESTDL1.cpy:<br />

01 PGSATR.<br />

02 TCSATR-DATA PIC X(6).<br />

02 TTCODICE-1 PIC X(9).<br />

02 TTCODICE-2 PIC X(9).<br />

02 TCSATR-FILLER-1 PIC X(1).<br />

01 PGSCSD.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPROGRESSIVO PIC 9(4).<br />

03 TMPBIN PIC S9(8) COMP REDEFINES TMPROGRESSIVO.<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

01 PGSCUM.<br />

02 TCSCUM-TEYCUM PIC X(24).<br />

02 TUPNUM1 PIC 9(5) COMP-3.<br />

02 TUPNUM2 PIC 9(5) COMP-3.<br />

02 TUPNUM3 PIC 9(5) COMP-3.<br />

02 TUPNUM4 PIC 9(5) COMP-3.<br />

02 TUPNUM5 PIC 9(5) COMP-3.<br />

The segment name coincides with a 01 level field name, and its structure defines the<br />
record description to use for the record conversion and the relative RDBS table<br />
definition. There are as many record descriptions as DLI segments present in<br />
the specified DBD.<br />

If one of the COPY segments contains a REDEFINES instruction, it should be<br />
appropriately split into two or more different 01 levels. The names of the new 01 level<br />
fields should have the following format: segment_name-suffix. For example,<br />

01 PGSCSD.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPCHAR PIC X(4).<br />

02 TMPBIN PIC S9(8) COMP REDEFINES TMPCHAR.<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

should become<br />

01 PGSCSD-XXX.



02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPCHAR PIC X(4).<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

01 PGSCSD-YYY.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPBIN PIC S9(8) COMP.<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

(the relative suffixes XXX and YYY will be used during the data conversion; see the<br />
Conversion rules definition paragraph below).<br />

Then, for each DBD, the following sequence of commands should be executed from the<br />
$HOME/h2r/import/diction directory:<br />

cpy2xml -o dbd_name.xml dbd_name.cpy<br />

produces a dbd_name.xml intermediate xml-format file in the current<br />
$HOME/h2r/import/diction directory.<br />

xmlconverter -h2r -p -n -o dbd_name.cbl dbd_name.xml<br />

produces a dbd_name.cbl COBOL program for the data conversion (see also the<br />
Conversion rules definition and Data conversion paragraphs below) under the<br />
$HOME/h2r/import/diction directory.<br />

If a multirecord segment is present in the DBD (the REDEFINES case described<br />
above), this command also needs a -c option:<br />

xmlconverter -h2r -p -n -c copy_rule -o dbd_name.cbl dbd_name.xml<br />

where copy_rule is the name of the COPY file describing the relative data conversion<br />
management (see the Conversion rules definition and Data conversion paragraphs below).<br />

xmlconverter -db -o dbd_name.sql dbd_name.xml<br />

produces a dbd_name.sql SQL script for RDBS tables creation under the<br />
$HOME/h2r/import/diction directory. This last command can be executed for all DBDs<br />
together by means of the "ant" utility.<br />

When adjusting a DBD COPY with multirecord segments, the first layout (01 level) of the<br />
segment will also describe the corresponding RDBS table. So, from all possible segment<br />
layouts, the most generic one should be chosen, taking into consideration that the PIC X<br />
COBOL type is in this case "good" also for the other types. The chosen layout should be<br />
placed first in the 01 level sequence.<br />

For example:<br />

01 PGSCSD.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPBIN PIC S9(8) COMP.<br />

02 TMPCHAR PIC X(4) REDEFINES TMPBIN.<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

becomes<br />

01 PGSCSD-XXX.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPCHAR PIC X(4).<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />


01 PGSCSD-YYY.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPBIN PIC S9(8) COMP.<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

In this case the layout with the TMPCHAR field of PIC X(4) type is chosen as the generic<br />
one and put in first place. The RDBS table TESTDL1_PGSCSD will have the following<br />
structure:<br />

TCSCSD_TEYCSD CHAR (22)<br />

TMPCHAR CHAR(4)<br />

TCSCSD_FILLER_1 CHAR(4)<br />

If no preexisting layout describes all the others, one should be manually created by<br />
changing the conflicting fields into a new generic PIC X type. For example,<br />

01 PGSCSD.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPBIN PIC S9(8) COMP.<br />

02 TMPNUM PIC 9(4) REDEFINES TMPBIN.<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

becomes<br />

01 PGSCSD-XXX.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPCHAR PIC X(4).<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

01 PGSCSD-YYY.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPBIN PIC S9(8) COMP.<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

01 PGSCSD-ZZZ.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPNUM PIC 9(4).<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

Or, in a more complicated situation:<br />

01 PGSCSD.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMP-1.<br />

03 FIELD-1 PIC S9(8) COMP.<br />

03 FIELD-2 PIC X(4).<br />

02 TMP-2 REDEFINES TMP-1.<br />

03 FIELD-3 PIC 9(5) .<br />

03 FIELD-4 PIC S9(5) COMP-3.<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

becomes<br />

01 PGSCSD-XXX.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMP-GEN PIC X(8).<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />


01 PGSCSD-YYY.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMP-1.



03 FIELD-1 PIC S9(8) COMP.<br />

03 FIELD-2 PIC X(4).<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

01 PGSCSD-ZZZ.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMP-2.<br />

03 FIELD-3 PIC 9(5) .<br />

03 FIELD-4 PIC S9(5) COMP-3.<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />
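The replacement layouts must all span the same number of bytes, since the generic PIC X field simply renames the raw byte area. The usage sizes involved above can be checked with a small sketch (the helper is illustrative, not an H2R utility; the sizes follow the usual IBM COBOL storage rules):

```python
# Byte sizes for the COBOL usages appearing in the layouts above.
# Illustrative helper only, not part of H2R.
def field_bytes(n, usage="DISPLAY"):
    if usage == "COMP":        # binary: 1-4 digits -> 2, 5-9 -> 4, 10-18 -> 8
        return 2 if n <= 4 else 4 if n <= 9 else 8
    if usage == "COMP-3":      # packed decimal: one nibble per digit + sign
        return n // 2 + 1
    return n                   # DISPLAY: one byte per character or digit

# TMPBIN PIC S9(8) COMP occupies 4 bytes, so the generic layout can
# cover it with TMPCHAR PIC X(4):
print(field_bytes(8, "COMP"), field_bytes(4))        # -> 4 4
# The TMP-1 group (S9(8) COMP + X(4)) is 8 bytes -> TMP-GEN PIC X(8):
print(field_bytes(8, "COMP") + field_bytes(4))       # -> 8
```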

Now the COPY_FIELDS internal table should be loaded:<br />

ant copy_fields<br />

For each xml-file dbd_name.xml created previously in the $HOME/h2r/import/diction<br />
directory (see the Copies analyzing paragraph above), this command produces the<br />
corresponding SQL script dbd_name.sql in the same $HOME/h2r/import/diction<br />
directory. These scripts should then be executed manually:<br />

sqlplus -s $H2RSQLID @$HOME/h2r/import/diction/dbd_name.sql<br />

This loads the COPY_FIELDS internal table.<br />

Copies analyzing for logical DBD<br />

The processing of logical DBDs needs manual personalization in order to take into<br />
consideration the different logical connections between the segments. See the<br />
Appendixes for examples of such intervention. However, the corresponding elaboration<br />
of your logical DBDs can also be provided directly by HTWC.<br />

DBD and COPIES cross analyzing<br />

For each physical DBD, execute the<br />

h2r_generate_dbd dbd_name<br />

command.<br />

For every DBD dbdname, this command compares the information stored in the<br />
COPY_FIELDS and DBD_FIELD tables and loads the result into the DBD_COPY internal<br />
table. It also creates the following objects for every DBD:<br />

The segments copies dbdname_segname.cpy in $HOME/h2r/cpy directory<br />

The user RDBS tables’ creation scripts in the $HOME/h2r/load/sql directory<br />

The data load program in the $HOME/h2r/load/prog directory<br />

The IO program in the $HOME/h2r/prog_io directory<br />

The programs are also compiled.<br />

The compilation of some IO programs may require some additional copybooks; you will<br />
notice it in the COBOL compiler error messages. It means that the corresponding<br />
physical DBD has a logical relationship and some virtual segment virtualsegmname is<br />
present in the DBD. In this case the following COPY files should be manually created as<br />
empty dummies in the $HOME/h2r/prog_io directory:<br />

dbdname_DL1_EVALUATE.cpy<br />

dbdname_DL1_WORKING.cpy<br />

dbdname_virtualsegmname.cpy<br />

while the COPY dbdname_PAIRED_virtualsegmname.cpy should just contain the<br />

following instructions:<br />

virtualsegmname-PROCEDURE.<br />

continue.<br />

virtualsegmname-PROCEDURE-EX.<br />

exit.<br />

To control the result, make sure that the following objects were created:<br />

In $HOME/h2r/load/sql directory:<br />

dbdname_CREATE_FOREIGN.sql<br />

dbdname_CREATE_KEYS.sql<br />

dbdname_CREATE_TABLES.sql<br />

dbdname_CREATE_TRIGGERS.sql<br />

dbdname_CREATE_XKEYS.sql<br />

dbdname_DROP_FOREIGN.sql<br />

dbdname_DROP_KEYS.sql<br />

dbdname_DROP_TABLES.sql<br />

dbdname_DROP_TRIGGERS.sql<br />

dbdname_DROP_XKEYS.sql<br />

In $HOME/h2r/load/prog directory:<br />

dbdname_LOAD.pco<br />

dbdname_LOAD.cob<br />

dbdname_LOAD.gnt<br />

In $HOME/h2r/cpy directory:<br />

dbdname_segname.cpy<br />

for every segment segname of DBD dbdname<br />

In $HOME/h2r/prog_io directory:<br />

dbdname.pco<br />

dbdname.cob<br />

dbdname.gnt<br />


Components generation<br />

IO programs generation<br />

Each DBD has an associated IO program that manages read and write accesses for<br />
that DBD. These COBOL programs are automatically generated in the<br />
$HOME/h2r/prog_io directory during the DBD analyzing phase (see above) when<br />
executing the h2r_generate_dbd command.<br />

In order to create and compile the programs corresponding to one physical DBD, the<br />
h2r_prog_dbd script should be executed.<br />

Users tables scripts generation<br />

To each DBD segment corresponds a user RDBS table. The tables are created by<br />
means of SQL scripts that are automatically generated in the $HOME/h2r/load/sql<br />
directory during the DBD analyzing phase (see above) when executing the<br />

h2r_generate_dbd dbd_name<br />

command.<br />

For each DBD dbdname there are 10 table creation scripts:<br />

dbdname_CREATE_TABLES.sql      creates tables for user data<br />
dbdname_CREATE_KEYS.sql        creates primary keys<br />
dbdname_CREATE_XKEYS.sql       creates secondary indexes<br />
dbdname_CREATE_TRIGGERS.sql    creates triggers for secondary keys management<br />
dbdname_CREATE_FOREIGN.sql     creates FOREIGN KEY constraints for parentage integrity conservation<br />
dbdname_DROP_FOREIGN.sql       drops FOREIGN KEY constraints<br />
dbdname_DROP_TRIGGERS.sql      drops triggers<br />
dbdname_DROP_XKEYS.sql         drops secondary indexes<br />
dbdname_DROP_KEYS.sql          drops primary keys<br />
dbdname_DROP_TABLES.sql        drops user tables<br />

If needed, each of the scripts can be executed separately.<br />

Conversion program generation<br />

In the copies analyzing phase (see above), the COBOL conversion program is generated<br />
automatically in the $HOME/h2r/import/diction directory, taking into consideration the<br />
internal DBD copy structure. The segment name of a DBD COPY coincides with a 01 level<br />
field name, and its structure defines the record description to use for the record<br />
conversion and the relative RDBS table definition. For example,<br />


TESTDL1.cpy:<br />


01 PGSATR.<br />

02 TCSATR-DATA PIC X(6).<br />

02 TTCODICE-1 PIC X(9).<br />

02 TTCODICE-2 PIC X(9).<br />

02 TCSATR-FILLER-1 PIC X(1).<br />

01 PGSCSD.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPROGRESSIVO PIC 9(4).<br />

03 TMPBIN PIC S9(8) COMP<br />

REDEFINES TMPROGRESSIVO.<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

01 PGSCUM.<br />

02 TCSCUM-TEYCUM PIC X(24).<br />

02 TUPNUM1 PIC 9(5) COMP-3.<br />

02 TUPNUM2 PIC 9(5) COMP-3.<br />

02 TUPNUM3 PIC 9(5) COMP-3.<br />

02 TUPNUM4 PIC 9(5) COMP-3.<br />

02 TUPNUM5 PIC 9(5) COMP-3.<br />

this DBD COPY describes three different segments, with three different layouts to<br />
distinguish during the conversion of a single DBD record. There are as many record<br />
descriptions as DLI segments. Each record of unloaded data has a 16-byte header<br />
where the DBD name and segment name are stored. The segment name is translated<br />
from EBCDIC to ASCII and then tested in order to use the corresponding record<br />
description to convert the area.<br />

Conversion rules definition<br />

If a multirecord segment is present in a DBD, the related record has more than one<br />
conversion layout. Which layout should be taken during the conversion of a single record<br />
is determined by the internal application rules regarding the value of one or more fields of<br />
the record. For example,<br />

01 PGSCSD-XXX.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPCHAR PIC X(4).<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

01 PGSCSD-YYY.<br />

02 TCSCSD-TEYCSD PIC X(22).<br />

02 TMPBIN PIC S9(8) COMP.<br />

02 TCSCSD-FILLER-1 PIC X(4).<br />

defines two different layouts of the record. To choose the "right" one, we may have for<br />
example the following rule: when TCSCSD-FILLER-1 is filled with spaces, the first<br />
layout (with the suffix XXX) should be taken; otherwise the second one (with the suffix<br />
YYY) is used.<br />

In order to manage the multirecord conversion, the relative COPY should be manually<br />
created for each multirecord segment. This COPY is automatically included in the<br />
corresponding COBOL conversion program; the name of the COPY is decided during the<br />
creation of this program (see the Copies analyzing paragraph above). In the COPY, the<br />
rules of conversion should be described by means of an appropriate COBOL instruction.<br />
After the recognition of the rule, the segment layout suffix ('XXX' or 'YYY' in our case)<br />
should be moved to the special buffer field with the fixed name RECORD-XX. For<br />
example,<br />

IF TCSCSD-FILLER-1 = X"40404040" THEN<br />

MOVE 'XXX' TO RECORD-XX<br />

ELSE<br />

MOVE 'YYY' TO RECORD-XX<br />

END-IF<br />

The test of the value (for the character fields) should be executed in EBCDIC; otherwise<br />
the field should be moved to a fixed buffer ASCII-BUF and converted to ASCII.<br />

If you have more than one multirecord segment in one DBD, the segment name should<br />
also be specified in the rule testing, as follows:<br />

IF XCONV-SEGNAME = 'PGSCSD' THEN<br />

IF TCSCSD-FILLER-1 = X"40404040" THEN<br />

MOVE 'XXX' TO RECORD-XX<br />

ELSE<br />

MOVE 'YYY' TO RECORD-XX<br />

END-IF<br />

END-IF<br />

where XCONV-SEGNAME is a special internal field, always containing the current<br />
segment name in ASCII.<br />
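The same selection logic can be sketched outside COBOL for clarity. The rule, field offsets and EBCDIC space test mirror the PGSCSD example above; the function and variable names are illustrative, not part of H2R:

```python
EBCDIC_SPACES = b"\x40\x40\x40\x40"   # X"40404040" in the COBOL rule above

def pick_layout(segname_ascii, record):
    """Return the layout suffix (what the COBOL rule moves to RECORD-XX)."""
    if segname_ascii == "PGSCSD":
        # TCSCSD-FILLER-1 sits after the 22-byte key and the 4-byte
        # TMPCHAR area, i.e. at offset 26 of the raw EBCDIC record.
        filler = record[26:30]
        return "XXX" if filler == EBCDIC_SPACES else "YYY"
    return ""                          # single-layout segments need no rule

print(pick_layout("PGSCSD", b"\x40" * 30))   # -> XXX (filler is EBCDIC spaces)
```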

Load programs generation<br />

This group of programs is necessary for loading the converted data into the RDB. To<br />
every physical DBD corresponds one load program under the $HOME/h2r/load/prog<br />
directory. You need no execution scripts to create them; they are already created<br />
automatically during the previous "DBD and COPIES cross analyzing" phase by the<br />
h2r_generate_dbd command.<br />

User program elaboration<br />

User programs are elaborated with the standard XFRAME xcob utility with the "-k"<br />
additional option in order to handle the BLL pointer management. The online DLI<br />
programs also need a "-O EXECDLI" parameter that invokes the dl1exec precompiler,<br />
which expands the EXEC DLI formalism (if present) into a CALL level one.<br />

The batch DLI programs need a "-O DL1PRE" parameter that invokes the dl1pre<br />
precompiler, which controls the entry-point DLITCBL management by adding a GOBACK<br />
target after PROCEDURE DIVISION.<br />

IMS formats compiling<br />

The IMS formats should be unloaded from the host into the $HOME/fmt UNIX directory. In<br />
order to compile an IMS format format_name, go to the $HOME/fmt directory and execute<br />
the following command:<br />

fmt2bms format_name.ims<br />

This command will produce the format_name.bms SDF mapset file and a number of<br />
.msg message files.<br />


Data management<br />

Data unloading<br />

Data unload is executed on the HOST machine by the DBDLOADY standard program. For<br />
each physical DBD, this program should be appropriately adapted and then executed. A<br />
sequential fixed-length file is created for every DBD. The file name should coincide with<br />
the physical DBD name.<br />

The record format is the following: DBD_NAME (8 bytes), SEGMENT_NAME (8 bytes),<br />
record contents in binary format. So for each DBD, the record length should be calculated<br />
as the maximum segment length of the DBD plus the 16 bytes of the header. After the<br />
appropriate adaptation regarding the record length and the DBD name, the program will<br />
scroll the entire DBD reading the segments with the GN command, memorizing them and<br />
filling the eventual rest of a record with spaces up to the maximum length. The result of<br />
the unloading should be transferred in binary format to the UNIX machine and put into the<br />
$HOME/h2r/import/ebcdic directory.<br />
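A sketch of how one unloaded record could be decoded follows. The 8+8 byte header layout and the space padding come from the description above; cp500 is assumed here as the EBCDIC codec, and the function name is illustrative:

```python
import codecs

HEADER = 16  # DBD_NAME (8 bytes) + SEGMENT_NAME (8 bytes), both EBCDIC

def split_record(record):
    """Split one fixed-length unload record into (dbd, segment, body)."""
    dbd = codecs.decode(record[0:8], "cp500").rstrip()
    seg = codecs.decode(record[8:16], "cp500").rstrip()
    return dbd, seg, record[HEADER:]   # body stays binary for conversion

# "TESTDL1" and "PGSATR" padded to 8 characters and encoded in EBCDIC,
# followed by a dummy binary body:
rec = ("TESTDL1".ljust(8) + "PGSATR".ljust(8)).encode("cp500") + b"\x00" * 9
dbd, seg, body = split_record(rec)
print(dbd, seg, len(body))   # -> TESTDL1 PGSATR 9
```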

Data conversion<br />

Put the unloaded IMS/DB original archives into the $HOME/h2r/import/ebcdic directory<br />
with the name "dbd_name"; ftp should be executed in binary mode without any type of<br />
conversion. The format of the file is described above in the "Data unloading" section.<br />
Naturally, the son-segment records should be positioned directly below the corresponding<br />
father-segment record.<br />

Go to the conversion directory:<br />

cd $HOME/h2r/import/diction<br />

Compile the conversion program:<br />

cob -u dbd_name.cbl<br />

Fulfill the conversion:<br />

xvsamRts dbd_name ../ebcdic/dbd_name ../ascii/dbd_name.ASC<br />

User tables creation and data loading<br />

A "dbd_len" file should be created in the $HOME/h2r/load directory. This file should<br />
contain the list of physical DBDs; for each DBD, the maximum segment size of this DBD<br />
should be calculated and 16 bytes added. The result is to be written in "dbd_len"<br />
after the DBD name, as follows:<br />

CT9589 2064<br />

…<br />

(CT9589 - physical DBD name; 2048 - this DBD's largest segment length; 2064 = 2048 +<br />
16)<br />
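The arithmetic above (largest segment of the DBD plus the 16-byte unload header) can be sketched as follows; the helper name and the input mapping are illustrative, not part of H2R:

```python
HEADER = 16  # the 8-byte DBD name + 8-byte segment name prepended on unload

def dbd_len_lines(segment_sizes):
    """segment_sizes maps a DBD name to its segments' byte sizes."""
    return [f"{dbd} {max(sizes) + HEADER}"
            for dbd, sizes in sorted(segment_sizes.items())]

# CT9589's largest segment is 2048 bytes, as in the example above:
print(dbd_len_lines({"CT9589": [512, 2048, 1024]}))   # -> ['CT9589 2064']
```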

Open the $HOME/h2r/load/loadall.ksh file and check the value of the XRUN_LIBRARIES<br />
environment variable on the line<br />

export XRUN_LIBRARIES=$H2RHOME/lib/libdl1.so<br />

Make sure that the shared library extension corresponds to your system's conventions. If<br />
not, change it.<br />

Execute the<br />

ksh loadall.ksh<br />

command from the $HOME/h2r/load directory to start the data loading. The log files<br />
"dbd_name.log" will be situated in the $HOME/h2r/load/logs directory.<br />

The duration of this operation depends on the IMS/DB archives' size and may require<br />
much time. In this case the background mode of execution is advised:<br />

nohup ksh loadall.ksh > loadall.log &<br />

User program elaboration<br />

User programs are elaborated with the standard XFRAME xcob utility with the "-k"<br />
additional option in order to handle the BLL pointer management. To compile the online<br />
DLI program prog_name, enter<br />

xcob -ksua prog_name<br />

The online DLI programs with EXEC DLI instructions also need a "-O EXECDLI"<br />
parameter that invokes the dl1exec precompiler, which expands the EXEC DLI formalism<br />
into a CALL level one:<br />

xcob -ksua -O EXECDLI prog_name<br />

The batch DLI programs need a "-O DL1PRE" parameter that invokes the dl1pre<br />
precompiler, which controls the entry-point DLITCBL management by adding a GOBACK<br />
target after PROCEDURE DIVISION:<br />

xcob -kbsua -O DL1PRE prog_name<br />

Using H2R Run-time phase<br />

User guide<br />

Flow<br />

DL/1 IMS/DB configuration<br />

Edit the XCICS configuration file $HOME/etc/xcics.conf and put the right values for the<br />
RDBS connection, taking into consideration your system's shared library extension:<br />

load library=$H2RHOME/lib/libdl1.so;<br />

define dbc name=DL1, database=..., user=..., password=...;<br />

bind dbc=DL1 default;<br />

Set the use_rdbms and use_dli flags to yes:<br />

set use_rdbms=yes;<br />

set use_dli=yes;<br />


IMS/DC configuration<br />

Create the $HOME/etc/psblist file and put there the list of all PSBs used by the online<br />
programs of your application, as follows:<br />

psb_name1<br />

psb_name2<br />

…<br />

The first time that CICS starts, it will create a shared memory segment containing all PSB<br />
information; that will take some time. Afterwards, in order to optimize the startup<br />
performance, CICS will execute a "warm" start, using the existing shared memory. If for<br />
some reason (for example, a PSB was added) you need to recreate and reload the shared<br />
memory, execute the following command before starting your CICS session:<br />

h2r_reload_psb<br />

CICS at this moment should be down.<br />

Edit also your xjobinit.csh batch configuration file in the $HOME/etc directory, adding the<br />
h2r library setting and taking into consideration your system's shared library extension:<br />

setenv XRUN_LIBRARIES "$H2RHOME/lib/libdl1.so"<br />

If you work with IMS, add the following configuration to the $HOME/etc/xcics.conf file:<br />

load library=$H2RHOME/lib/libims.so;<br />

set use_xims=yes;<br />

set xims_format_path=$HOME/fmt;<br />

Then define a set of IMS terminals mapped to CICS logical terminals, as follows:<br />

define ims_terminal name=….., cics=….;<br />

where name is an IMS internal name of up to 8 characters, and the cics parameter value<br />
defines the external logical name of a terminal that will be recognized by XCICS.<br />

Define a set of IMS transactions, as follows:<br />

define ims_transaction name=……, program=……, spa=…., psb=……;<br />

where to every transaction name correspond the program, the PSB and the SPA length.<br />

For example,<br />

define ims_terminal name=TN00IMS0, cics=TN00;<br />

define ims_transaction name=IVTCB, program=DFSIVA34, spa=80, psb=DFSIVP34;<br />

DL1 BATCH runtime switches<br />

A DL1 batch program is called from a job using a standard header module, such as<br />
DFSRRC00, DLZRRC00 or DLZMPI00. These modules are put under the $HOME/bin<br />
directory and can be personalized for the needs of the application. These csh scripts read<br />
the first string<br />

DLI,prog_name,psb_name<br />

from the standard input and recognize the parameters.<br />
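The placeholders of the control string are garbled in this copy of the guide; assuming the usual DLI,program,psb layout, the card could be parsed as sketched below (illustration only, not an H2R utility):

```python
def parse_dli_card(line):
    """Parse a 'DLI,prog,psb' control card; the field order is assumed."""
    parts = [p.strip() for p in line.split(",")]
    if not parts or parts[0] != "DLI":
        raise ValueError("not a DLI control card: " + line)
    prog = parts[1] if len(parts) > 1 else ""
    psb = parts[2] if len(parts) > 2 else ""
    return prog, psb

print(parse_dli_card("DLI,MYPROG,MYPSB"))   # -> ('MYPROG', 'MYPSB')
```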


In order to call the corresponding batch program the following command is performed:<br />

xvsamRts DL1DSPT $SQLID [additional parameters]<br />

The additional parameters may have one or more of the following values:<br />

-T  application program with DLITCBL entry point<br />

-t  SQL trace activation<br />

-s  prints execution DL1 statistics on exit<br />

-b  program with the first PCB of BMP type<br />

-a  COBOL animation switch<br />

-L  defines the original program language<br />

-l  defines the debug level for the log file $XVSAM/prog_name.log<br />
    (00 – no log file created, 15 – full logging level)<br />



Appendixes<br />

H2R utilities<br />

UNIX scripts<br />

h2r_create_environment<br />

This command creates the H2R environment directories, if not existing.<br />

h2r_dbd dbd_name<br />

This command analyses a DBD source $HOME/h2r/src/dbd/dbd_name.dbd and loads<br />
the result into the DBD-related tables DBD_LIST, DBD_SEGM, DBD_FIELD, DBD_XDFLD<br />

h2r_psb psb_name<br />

This command analyses a PSB source $HOME/h2r/src/psb/psb_name.psb and loads<br />
the result into the PSB-related tables PSB_PCB, PSB_SENSEG<br />

h2r_create_dbd_list<br />

Creates a guide list of DBDs: $HOME/h2r/dbd/dbd_list.lst<br />

h2r_generate_dbd dbd_name<br />

For the physical DBD dbd_name, this command compares the information stored in<br />
the COPY_FIELDS and DBD_FIELD internal tables and loads the result into the<br />
DBD_COPY internal table. It also creates for this DBD the following objects (the programs<br />
are also compiled):<br />

the segment copies dbdname_segname.cpy in the $HOME/h2r/cpy directory<br />

the user RDBS tables' creation scripts in the $HOME/h2r/load/sql directory<br />

the data load program in the $HOME/h2r/load/prog directory<br />

the IO program in the $HOME/h2r/prog_io directory<br />

the IO programs for all corresponding secondary indexes in the<br />
$HOME/h2r/prog_io directory<br />

h2r_generate_all_dbd<br />

Launches the h2r_generate_dbd script for each physical DBD from dbd_list.lst<br />

h2r_prog_dbd dbd_name<br />

This command creates and compiles the programs corresponding to the physical DBD<br />
dbd_name and its secondary indexes<br />

h2r_prog_all_dbd<br />

Launches the h2r_prog_dbd script for each physical DBD from dbd_list.lst<br />

h2r_reload_psb<br />

This command reloads the shared memory, containing all PSB information for the CICS<br />

usage.<br />

load.ksh dbd_name<br />

This command loads the user data. It should be executed from the $HOME/h2r/load<br />
directory. The log files dbd_name.log will be situated in the $HOME/h2r/load/logs<br />
directory.<br />

fmt2bms format_name.ims<br />

From the IMS format format_name.ims, this command produces the format_name.bms<br />
SDF mapset file and a number of .msg message files.<br />

SQL scripts<br />

sqlplus -s $H2RSQLID @$H2RSCRIPTS/tables.sql<br />

creates the H2R internal tables.<br />

sqlplus -s $H2RSQLID @$HOME/h2r/import/diction/dbd_name.sql<br />

Loads the COPY_FIELDS internal table for a DBD dbd_name<br />

Ant utilities<br />

ant dbd<br />

For each DBD source dbd_name in the $HOME/h2r/src/dbd directory, this command<br />
produces the corresponding SQL script dbd_name.sql in the $HOME/h2r/dbd directory<br />
and executes it, loading the DBD_SEGM, DBD_FIELD, DBD_XDFLD, DBD_LIST tables.<br />

ant psb<br />

For each PSB source psb_name in the $HOME/h2r/src/psb directory, this command<br />
produces the corresponding SQL script psb_name.sql in the $HOME/h2r/psb directory<br />
and executes it, loading the PSB_PCB, PSB_SENSEG tables.<br />

ant copy_fields<br />

For each xml-file dbd_name.xml in the $HOME/h2r/import/diction directory, this<br />
command produces the corresponding SQL script dbd_name.sql in the same<br />
$HOME/h2r/import/diction directory.<br />

ant dbd_list<br />

produces the dbd_list.lst flat file under the $HOME/h2r/dbd directory. It is a reference<br />
guide file describing all DBDs.<br />

Miscellanea<br />

The following utilities should be started from the $HOME/h2r/import/diction directory:<br />

cpy2xml -db -o dbd_name.xml dbd_name.cpy<br />

Produces a dbd_name.xml copy description.<br />

xmlconverter -h2r -p -n -o dbd_name.cbl dbd_name.xml<br />

Produces a COBOL data conversion program<br />

xmlconverter -h2r -p -n -c copy_rule -o dbd_name.cbl dbd_name.xml<br />

Produces a COBOL data conversion program for a multirecord case<br />

xmlconverter -db -o dbd_name.sql dbd_name.xml<br />

Produces a dbd_name.sql script for COPY_FIELDS loading<br />


Internal H2R tables description<br />

DBD_SEGM<br />


DBD_NAME CHAR(8) NOT NULL<br />

SEGM_NAME CHAR(8) NOT NULL<br />

PARENT_NAME CHAR(8)<br />

PARENT_TYPE CHAR(4)<br />

LPARENT_NAME CHAR(8)<br />

LPARENT_DBD CHAR(8)<br />

BYTES_MIN INT<br />

BYTES_MAX INT<br />

SEGM_POINTER CHAR(8)<br />

RELATI_RULES CHAR(8)<br />

INSERT_RULES CHAR(8)<br />

COMPRTN_NAME CHAR(8)<br />

COMPRTN_TYPE CHAR(1)<br />

COMPRTN_INIT CHAR(4)<br />

SOURCE_SEGM CHAR(8)<br />

SOURCE_TYPE CHAR(1)<br />

SOURCE_DBD CHAR(8)<br />

LSOURCE_SEGM CHAR(8)<br />

LSOURCE_TYPE CHAR(1)<br />

LSOURCE_DBD CHAR(8)<br />

USED CHAR(1)<br />

PRIMARYKEY (DBD_NAME, SEGM_NAME)<br />

DBD_FIELD

DBD_NAME CHAR(8) NOT NULL
SEGM_NAME CHAR(8) NOT NULL
FIELD_NAME CHAR(8) NOT NULL
FIELD_SEQ CHAR(1)
FIELD_BYTES INT
FIELD_START INT
FIELD_TYPE CHAR(1)
PRIMARY KEY (DBD_NAME, SEGM_NAME, FIELD_NAME)

DBD_XDFLD

DBD_NAME CHAR(8) NOT NULL
SEGM_NAME CHAR(8) NOT NULL
PROCSEQ CHAR(8) NOT NULL
KEY_NAME CHAR(8)
SEGMENT CHAR(8)
SRCH VARCHAR2(1024)
SUBSEQ VARCHAR2(1024)
NULLVAL CHAR(5)
EXTRTN CHAR(8)
PRIMARY KEY (DBD_NAME, SEGM_NAME, PROCSEQ)

DBD_LIST

DBD_NAME CHAR(8) NOT NULL
DBD_ACCESS CHAR(8)
PRIMARY KEY (DBD_NAME)

DBD_COPY

DBD_NAME CHAR(8)
SEGM_NAME CHAR(8)
FIELD_NAME VARCHAR2(100)
FIELD_LEVEL NUMBER
FIELD_START NUMBER
FIELD_LENGTH NUMBER
FIELD_TYPE CHAR(1)
FIELD_DEC NUMBER
FIELD_SIGN NUMBER
FIELD_USE CHAR(1)
BODY_START NUMBER
H2R_TYPE CHAR(1)
KEY_SEQ CHAR(1)
SEGM_LEVEL NUMBER
CONCAT_START NUMBER
REFERENCE_NAME VARCHAR2(1000)
REFERENCE_START NUMBER
PRIMARY KEY (DBD_NAME, SEGM_NAME, FIELD_START)

PSB_PCB

PSB_NAME CHAR(8) NOT NULL
DBD_NAME CHAR(8)
PCB_NUM INT NOT NULL
PCB_TYPE CHAR(2)
PCB_PROCOPT CHAR(8)
PCB_KEYLEN INT
PCB_PROCSEQ CHAR(8)
PCB_POS CHAR(1)
PCB_LTERM CHAR(8)
PCB_NAME CHAR(8)
PCB_ALTRESP CHAR(1)
PCB_SAMETRM CHAR(1)
PCB_MODIFY CHAR(1)
PCB_EXPRESS CHAR(1)
PCB_PCBNAME CHAR(8)
PCB_LIST CHAR(1)
PRIMARY KEY (PSB_NAME, PCB_NUM)


PSB_SENSEG

PSB_NAME CHAR(8)
PCB_NUM INT
SEGM_NUM INT
SEGM_NAME CHAR(8)
PARENT_NAME CHAR(8)
SEGM_PROCOPT CHAR(4)
PRIMARY KEY (PSB_NAME, PCB_NUM, SEGM_NUM)

COPY_FIELDS

NOME_OGGETTO CHAR(8)
CAMPO VARCHAR2(100)
CAMPO01 VARCHAR2(100)
REDEFINE VARCHAR2(100)
STRUTTURA NUMBER(1)
LIVELLO NUMBER(2)
OCCURS NUMBER
TIPO CHAR(1)
BYTES NUMBER
DECIMALI NUMBER
SEGNATO NUMBER
USO CHAR(1)
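The COPY_FIELDS column names are Italian: NOME_OGGETTO = object name, CAMPO = field, REDEFINE = redefines, STRUTTURA = structure, LIVELLO = level, TIPO = type, DECIMALI = decimal digits, SEGNATO = signed, USO = usage. The actual DDL lives in $H2RSCRIPTS/tables.sql; the following is only a reconstruction sketched from the field list above:

```python
# The COPY_FIELDS columns as (name, type) pairs, in table order.
COPY_FIELDS_COLUMNS = [
    ("NOME_OGGETTO", "CHAR(8)"),      # object name
    ("CAMPO", "VARCHAR2(100)"),       # field
    ("CAMPO01", "VARCHAR2(100)"),
    ("REDEFINE", "VARCHAR2(100)"),    # redefines
    ("STRUTTURA", "NUMBER(1)"),       # structure
    ("LIVELLO", "NUMBER(2)"),         # level
    ("OCCURS", "NUMBER"),
    ("TIPO", "CHAR(1)"),              # type
    ("BYTES", "NUMBER"),
    ("DECIMALI", "NUMBER"),           # decimal digits
    ("SEGNATO", "NUMBER"),            # signed
    ("USO", "CHAR(1)"),               # usage
]

def create_table_sql(table, columns):
    # Rebuild a CREATE TABLE statement from the (name, type) description.
    body = ",\n  ".join(f"{name} {sqltype}" for name, sqltype in columns)
    return f"CREATE TABLE {table} (\n  {body}\n);"

ddl = create_table_sql("COPY_FIELDS", COPY_FIELDS_COLUMNS)
print(ddl)
```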


Logical DBD elaboration: example

Let us consider the following physical DBD HTDBDDA:

DBD      NAME=HTDBDDA,                        *
         ACCESS=HIDAM
DATASET  DD1=PNDATA,                          *
         DEVICE=3390,SIZE=6144,               *
         FRSPC=(10,10),SCAN=19
SEGM     NAME=PNITM,                          *
         BYTES=120,                           *
         PARENT=0,                            *
         PTR=TB,                              *
         RULES=PPV
LCHILD   NAME=(PNIND,HTDBDIN),                *
         PTR=INDX
LCHILD   NAME=(PNIPT,HTDBDDA),                *
         PTR=DBLE,                            *
         PAIR=PNIVP
FIELD    NAME=(PNITMPN,SEQ,U),                *
         BYTES=19,                            *
         START=1
FIELD    NAME=PNITMCP,                        *
         BYTES=10,                            *
         START=22
SEGM     NAME=PNIVP,                          *
         PARENT=PNITM,                        *
         SOURCE=((PNIPT,,HTDBDDA)),           *
         PTR=PAIRED
FIELD    NAME=(PNIVPKEY,SEQ,U),               *
         BYTES=54,                            *
         START=1
SEGM     NAME=PNPST,                          *
         BYTES=50,                            *
         PARENT=PNITM,                        *
         PTR=TB
FIELD    NAME=(PNPSTPN,SEQ,U),                *
         BYTES=35,                            *
         START=1
SEGM     NAME=PNIPT,                          *
         BYTES=19,                            *
         PARENT=((PNPST),(PNITM,V,HTDBDDA)),  *
         PTR=(T,LTB),                         *
         RULES=PVV
DBDGEN
FINISH
END

This physical DBD contains two normal segments, PNITM and PNPST, one virtual segment, PNIVP, and one paired pointer segment, PNIPT. The COPY of this DBD in the $HOME/h2r/import/diction directory follows:

01 PNITM.
   02 PNITMPN PIC X(19).
   02 FILLER-1 PIC X(2).
   02 PNITMCP PIC X(10).
   02 FILLER-2 PIC X(1).
   02 ITMUMC PIC X(2).
   02 PARTPN PIC X(86).


01 PNPST.
   02 PNPSTPN PIC X(35).
   02 SSTRVAL2 PIC X(4).
   02 SSTRQTA PIC 9(9) COMP-3.
   02 SSTRCALT PIC X(1).
   02 SSTRKIT PIC X(1).
   02 SSTRTC PIC X(1).
   02 SSTRPROV PIC X(3).

01 PNIPT.
   02 PNWUI PIC X(19).
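A useful consistency check here: each COPY record length must match the BYTES= value declared for the corresponding segment in the DBD (PNPST is declared with BYTES=50, PNIPT with BYTES=19). Field sizes can be computed from the PIC clauses; a COMP-3 packed-decimal field of n digits occupies n/2 + 1 bytes (integer division). A quick sketch:

```python
import re

def field_bytes(pic, usage=""):
    """Byte size of an elementary item from its PIC clause.
    Handles only X(n) display and 9(n) COMP-3, enough for this COPY."""
    n = int(re.search(r"\((\d+)\)", pic).group(1))
    if usage == "COMP-3":
        return n // 2 + 1   # packed decimal: two digits per byte plus sign nibble
    return n                # DISPLAY: one byte per character

PNPST = [("X(35)", ""), ("X(4)", ""), ("9(9)", "COMP-3"),
         ("X(1)", ""), ("X(1)", ""), ("X(1)", ""), ("X(3)", "")]
PNIPT = [("X(19)", "")]

pnpst_len = sum(field_bytes(p, u) for p, u in PNPST)
pnipt_len = sum(field_bytes(p, u) for p, u in PNIPT)
print(pnpst_len, pnipt_len)  # 50 19, matching BYTES=50 and BYTES=19 in the DBD
```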

In relation to this DBD we have two logical DBDs: HTDBDEX for the explosion and HTDBDIM for the implosion:

DBD      NAME=HTDBDEX,ACCESS=LOGICAL
DATASET  LOGICAL
SEGM     NAME=PNITM,PARENT=0,SOURCE=((PNITM,,HTDBDDA))
SEGM     NAME=PNPST,PARENT=PNITM,SOURCE=((PNPST,,HTDBDDA))
SEGM     NAME=PNEXP,PARENT=PNPST,             *
         SOURCE=((PNIPT,,HTDBDDA),(PNITM,,HTDBDDA))
DBDGEN
FINISH
END

DBD      NAME=HTDBDIM,ACCESS=LOGICAL
DATASET  LOGICAL
SEGM     NAME=PNITM,PARENT=0,SOURCE=((PNITM,,HTDBDDA))
SEGM     NAME=PNIMP,PARENT=PNITM,             *
         SOURCE=((PNIVP,,HTDBDDA),(PNPST,,HTDBDDA))
SEGM     NAME=PNWUI,PARENT=PNIMP,SOURCE=((PNITM,,HTDBDDA))
DBDGEN
FINISH
END
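The SOURCE= clauses are the heart of these definitions: each logical segment resolves either to one physical segment or to a pair of them for a concatenated segment (logical child plus destination parent). Summarized as plain data (an illustration, not an H2R structure):

```python
# SOURCE= resolution of the two logical DBDs above: a single source means
# a direct mapping; a pair means a concatenated segment.
HTDBDEX = {
    "PNITM": [("PNITM", "HTDBDDA")],
    "PNPST": [("PNPST", "HTDBDDA")],
    "PNEXP": [("PNIPT", "HTDBDDA"), ("PNITM", "HTDBDDA")],
}
HTDBDIM = {
    "PNITM": [("PNITM", "HTDBDDA")],
    "PNIMP": [("PNIVP", "HTDBDDA"), ("PNPST", "HTDBDDA")],
    "PNWUI": [("PNITM", "HTDBDDA")],
}

concatenated = [seg for dbd in (HTDBDEX, HTDBDIM)
                for seg, sources in dbd.items() if len(sources) > 1]
print(concatenated)  # ['PNEXP', 'PNIMP'] - the concatenated segments
```

PNEXP and PNIMP are exactly the segments whose resolution cannot be derived from a single physical table, which is why the implosion side needs the manually written PAIRED copies shown below.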

In order to process these DBDs, you should create the corresponding COPYs under $HOME/h2r/import/diction:

HTDBDEX:

01 PNITM.
   02 PNITMPN PIC X(19).
   02 FILLER-1 PIC X(2).
   02 PNITMCP PIC X(10).
   02 FILLER-2 PIC X(1).
   02 ITMUMC PIC X(2).
   02 PARTPN PIC X(86).

01 PNPST.
   02 PNPSTPN PIC X(35).
   02 SSTRVAL2 PIC X(4).
   02 SSTRQTA PIC 9(9) COMP-3.
   02 SSTRCALT PIC X(1).
   02 SSTRKIT PIC X(1).
   02 SSTRTC PIC X(1).
   02 SSTRPROV PIC X(3).


01 PNEXP.
   02 PNITMPN PIC X(19).
   02 FILLER-1 PIC X(2).
   02 PNITMCP PIC X(10).
   02 FILLER-2 PIC X(1).
   02 ITMUMC PIC X(2).
   02 PARTPN PIC X(86).

HTDBDIM:

01 PNITM.
   02 PNITMPN PIC X(19).
   02 FILLER-1 PIC X(2).
   02 PNITMCP PIC X(10).
   02 FILLER-2 PIC X(1).
   02 ITMUMC PIC X(2).
   02 PARTPN PIC X(86).

01 PNIMP.
   02 PNITMKEY PIC X(19).
   02 PNPSTKEY PIC X(35).
   02 PNPSTPN PIC X(35).
   02 SSTRVAL2 PIC X(4).
   02 SSTRQTA PIC 9(9) COMP-3.
   02 SSTRCALT PIC X(1).
   02 SSTRKIT PIC X(1).
   02 SSTRTC PIC X(1).
   02 SSTRPROV PIC X(3).

01 PNWUI.
   02 PNITMPN PIC X(19).
   02 FILLER-1 PIC X(2).
   02 PNITMCP PIC X(10).
   02 FILLER-2 PIC X(1).
   02 ITMUMC PIC X(2).
   02 PARTPN PIC X(86).

Create the pared.sql file in the $HOME/h2r/dbd directory, or edit an existing one, adding the following information:

delete from DBD_FIELD where
DBD_NAME='HTDBDDA' and SEGM_NAME = 'PNIPT' and FIELD_NAME = 'PNWUI';

insert into DBD_FIELD values (
'HTDBDDA',
'PNIPT',
'PNWUI',
'S',
19,
1,
'C'
);

commit;
quit;

and execute it:

sqlplus $H2RSQLID @$HOME/h2r/dbd/pared.sql
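The pared.sql script follows a fixed pattern: delete any previous row, then insert the values in DBD_FIELD column order (DBD_NAME, SEGM_NAME, FIELD_NAME, FIELD_SEQ, FIELD_BYTES, FIELD_START, FIELD_TYPE). A small helper that emits such a script for a given field (an illustrative sketch, not part of H2R):

```python
# Emit the delete+insert pair for one DBD_FIELD row, with the values in
# the same column order as the DBD_FIELD table described earlier.
def pared_sql(dbd, segm, field, seq, nbytes, start, ftype):
    return (
        f"delete from DBD_FIELD where\n"
        f"DBD_NAME='{dbd}' and SEGM_NAME = '{segm}' and FIELD_NAME = '{field}';\n"
        f"insert into DBD_FIELD values (\n"
        f"'{dbd}', '{segm}', '{field}', '{seq}', {nbytes}, {start}, '{ftype}'\n"
        f");\n"
        f"commit;\n"
        f"quit;\n"
    )

script = pared_sql("HTDBDDA", "PNIPT", "PNWUI", "S", 19, 1, "C")
print(script)
```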

Manually create the following copies under the $HOME/h2r/prog_io directory:

HTDBDDA_DL1_EVALUATE.cpy - empty dummy
HTDBDDA_DL1_WORKING.cpy - empty dummy
HTDBDDA_PNIVP.cpy - empty dummy
HTDBDDA_PAIRED_PNIVP.cpy -


PNIVP-PROCEDURE.
    CONTINUE.
PNIVP-PROCEDURE-EX.
    EXIT.

HTDBDIM_DL1_EVALUATE.cpy

IF SEGM-NAME = "PNITM" THEN
    MOVE 'N' TO KT-KEY-CHAR (1) (200 : 1)
END-IF.

HTDBDIM_DL1_WORKING.cpy

01 HT-APPO PIC X(54).
EXEC SQL INCLUDE HTDBDDA_PNPST.cpy END-EXEC.

HTDBDIM_PAIRED_PNWUI.cpy

PNWUI-PROCEDURE.
    EVALUATE COP
        WHEN "READFRST"
            PERFORM PNWUI-READ-FIRST THRU PNWUI-READ-FIRST-EX
        WHEN "READNEXT"
            PERFORM PNWUI-READ-NEXT THRU PNWUI-READ-NEXT-EX
        WHEN "READCOND"
            PERFORM PNWUI-READ-COND THRU PNWUI-READ-COND-EX
        WHEN "READPATH"
            PERFORM PNWUI-READ-COND THRU PNWUI-READ-COND-EX
        WHEN "ISRT "
            PERFORM PNWUI-ISRT THRU PNWUI-ISRT-EX
        WHEN "DLET "
            PERFORM PNWUI-DLET THRU PNWUI-DLET-EX
        WHEN "REPL "
            PERFORM PNWUI-REPL THRU PNWUI-REPL-EX
    END-EVALUATE.
PNWUI-PROCEDURE-EX.
    EXIT.

PNWUI-READ-FIRST.
    MOVE KT-KEY-CHAR (1) (100 : 19) TO PNITM-PNITMPN.
    IF FLAG-HOLD = 'Y' THEN
        EXEC SQL SELECT * INTO :PNITM FROM HTDBDDA_PNITM
            WHERE DBD_PNITMPN = :PNITM-PNITMPN
            FOR UPDATE
        END-EXEC
    ELSE
        EXEC SQL SELECT * INTO :PNITM FROM HTDBDDA_PNITM
            WHERE DBD_PNITMPN = :PNITM-PNITMPN
        END-EXEC
    END-IF.
    IF SQLCODE = 0 THEN
        MOVE PNITM-PNITMPN (1 : 19) TO KT-KEY-CHAR (3) (1 : 19)
        MOVE 'Y' TO DL1-LAST-KEY-DEF
        MOVE 38 TO DL1-FB-KEY-LEN
        MOVE PNITM-PNITMPN (1 : 19) TO DL1-FB-KEY-ARR (20 : 19)
        MOVE PNITM (1 : 120) TO DL1-IO-AREA (1 : 120)
    ELSE
        MOVE 'N' TO DL1-LAST-KEY-DEF
    END-IF.
    MOVE SQLCODE TO RETCODE.
PNWUI-READ-FIRST-EX.
    EXIT.

PNWUI-READ-NEXT.
    MOVE 1000100 TO RETCODE.
PNWUI-READ-NEXT-EX.
    EXIT.

PNWUI-READ-COND.
    PERFORM PNWUI-READ-FIRST THRU PNWUI-READ-FIRST-EX.
PNWUI-READ-COND-EX.
    EXIT.

PNWUI-ISRT.
    MOVE 1000102 TO RETCODE.
PNWUI-ISRT-EX.
    EXIT.

PNWUI-DLET.
    MOVE 1000102 TO RETCODE.
PNWUI-DLET-EX.
    EXIT.

PNWUI-REPL.
    MOVE 1000102 TO RETCODE.
PNWUI-REPL-EX.
    EXIT.

HTDBDIM_PAIRED_PNIMP.cpy

PNIMP-PROCEDURE.
    EVALUATE COP
        WHEN "READFRST"
            PERFORM PNIMP-READ-FIRST THRU PNIMP-READ-FIRST-EX
        WHEN "READNEXT"
            PERFORM PNIMP-READ-NEXT THRU PNIMP-READ-NEXT-EX
        WHEN "READCOND"
            PERFORM PNIMP-READ-COND THRU PNIMP-READ-COND-EX
        WHEN "READPATH"
            PERFORM PNIMP-READ-COND THRU PNIMP-READ-COND-EX
        WHEN "ISRT "
            PERFORM PNIMP-ISRT THRU PNIMP-ISRT-EX
        WHEN "DLET "
            PERFORM PNIMP-DLET THRU PNIMP-DLET-EX
        WHEN "REPL "
            PERFORM PNIMP-REPL THRU PNIMP-REPL-EX
    END-EVALUATE.
PNIMP-PROCEDURE-EX.
    EXIT.

PNIMP-READ-FIRST.
    MOVE KT-KEY-CHAR (1) (1 : 19) TO PNIPT-PNWUI.
    IF FLAG-HOLD = 'Y' THEN
        EXEC SQL SELECT * INTO :PNIPT FROM HTDBDDA_PNIPT
            WHERE ROWNUM = 1
              AND DBD_PNWUI = :PNIPT-PNWUI
            ORDER BY PNPST_PNPSTPN, CURR_KEY
            FOR UPDATE
        END-EXEC
    ELSE
        EXEC SQL SELECT * INTO :PNIPT FROM HTDBDDA_PNIPT
            WHERE ROWNUM = 1
              AND DBD_PNWUI = :PNIPT-PNWUI
            ORDER BY PNPST_PNPSTPN, CURR_KEY
        END-EXEC
    END-IF.
    IF SQLCODE = 0 THEN
        MOVE PNIPT-CURR-KEY TO KT-KEY-NUM (2)
        MOVE PNIPT-PNPST-PNPSTPN (1 : 35) TO KT-KEY-CHAR (2) (1 : 35)
        MOVE PNIPT-PNWUI TO KT-KEY-CHAR (1) (1 : 19)
        MOVE PNIPT-PNITM-PNITMPN TO KT-KEY-CHAR (1) (100 : 19)
        MOVE 'Y' TO DL1-LAST-KEY-DEF
        MOVE 'Y' TO KT-KEY-CHAR (1) (200 : 1)
        MOVE PNIPT-PNWUI (1 : 19) TO DL1-IO-AREA (1 : 19)
        MOVE PNIPT-PNPST-PNPSTPN (1 : 35) TO DL1-IO-AREA (20 : 35)
    ELSE
        MOVE 'N' TO DL1-LAST-KEY-DEF
        MOVE 'N' TO KT-KEY-CHAR (1) (200 : 1)
    END-IF.
    MOVE SQLCODE TO RETCODE.
    IF SQLCODE = 0 THEN
        PERFORM READ-PNPST-PNIPT THRU READ-PNPST-PNIPT-EX
    END-IF.
PNIMP-READ-FIRST-EX.
    EXIT.

READ-PNPST-PNIPT.
    MOVE PNIPT-PNPST-PNPSTPN (1 : 35) TO PNPST-PNPSTPN (1 : 35).
    MOVE PNIPT-PNITM-PNITMPN (1 : 19) TO PNPST-PNITM-PNITMPN (1 : 19).
    IF FLAG-HOLD = 'Y' THEN
        EXEC SQL SELECT
            DBD_PNPSTPN, SSTRVAL2, SSTRQTA, SSTRCALT,
            SSTRKIT, SSTRTC, SSTRPROV, PNITM_PNITMPN
            INTO :PNPST FROM HTDBDDA_PNPST
            WHERE DBD_PNPSTPN = :PNPST-PNPSTPN
              AND PNITM_PNITMPN = :PNPST-PNITM-PNITMPN
            FOR UPDATE
        END-EXEC
    ELSE
        EXEC SQL SELECT
            DBD_PNPSTPN, SSTRVAL2, SSTRQTA, SSTRCALT,
            SSTRKIT, SSTRTC, SSTRPROV, PNITM_PNITMPN
            INTO :PNPST FROM HTDBDDA_PNPST
            WHERE DBD_PNPSTPN = :PNPST-PNPSTPN
              AND PNITM_PNITMPN = :PNPST-PNITM-PNITMPN
        END-EXEC
    END-IF.
    IF SQLCODE = 0 THEN
        MOVE 'Y' TO DL1-LAST-KEY-DEF
        MOVE 'Y' TO KT-KEY-CHAR (1) (200 : 1)
        MOVE PNPST-PNPSTPN TO KT-KEY-CHAR (2) (1 : 35)
        MOVE 54 TO DL1-FB-KEY-LEN
        MOVE PNPST-PNPSTPN TO DL1-FB-KEY-ARR (20 : 35)
        MOVE PNPST-PNITM-PNITMPN TO DL1-FB-KEY-ARR (1 : 19)
        MOVE PNPST (1 : 50) TO DL1-IO-AREA (55 : 50)
    ELSE
        MOVE 'N' TO DL1-LAST-KEY-DEF
        MOVE 'N' TO KT-KEY-CHAR (1) (200 : 1)
    END-IF.
    MOVE SQLCODE TO RETCODE.
READ-PNPST-PNIPT-EX.
    EXIT.

PNIMP-READ-NEXT.
    MOVE KT-KEY-CHAR (1) (1 : 19) TO PNIPT-PNWUI.
    MOVE KT-KEY-CHAR (2) (1 : 35) TO PNIPT-PNPST-PNPSTPN.
    MOVE KT-KEY-NUM (2) TO PNIPT-CURR-KEY.
    MOVE PNIPT-PNPST-PNPSTPN TO HT-APPO (20 : 35).
    MOVE KT-KEY-CHAR (1) (100 : 19) TO HT-APPO (1 : 19).
    IF FLAG-HOLD = 'Y' THEN
        EXEC SQL SELECT /*+ INDEX (HTDBDDA_PNIPT HTDBDDA_PNIPT_PNWUI) */ *
            INTO :PNIPT FROM HTDBDDA_PNIPT
            WHERE DBD_PNWUI = :PNIPT-PNWUI
              AND PNPST_PNPSTPN = :PNIPT-PNPST-PNPSTPN
              AND CURR_KEY > :PNIPT-CURR-KEY
              AND ROWNUM = 1
            ORDER BY DBD_PNWUI, PNPST_PNPSTPN, CURR_KEY
            FOR UPDATE
        END-EXEC
        IF SQLCODE = +1403 THEN
            EXEC SQL SELECT /*+ INDEX (HTDBDDA_PNIPT HTDBDDA_PNIPT_PNWUI) */ *
                INTO :PNIPT FROM HTDBDDA_PNIPT
                WHERE DBD_PNWUI = :PNIPT-PNWUI
                  AND PNPST_PNPSTPN > :PNIPT-PNPST-PNPSTPN
                  AND ROWNUM = 1
                ORDER BY DBD_PNWUI, PNPST_PNPSTPN
                FOR UPDATE
            END-EXEC
        END-IF
    ELSE
        EXEC SQL SELECT /*+ INDEX (HTDBDDA_PNIPT HTDBDDA_PNIPT_PNWUI) */ *
            INTO :PNIPT FROM HTDBDDA_PNIPT
            WHERE DBD_PNWUI = :PNIPT-PNWUI
              AND PNPST_PNPSTPN = :PNIPT-PNPST-PNPSTPN
              AND CURR_KEY > :PNIPT-CURR-KEY
              AND ROWNUM = 1
            ORDER BY DBD_PNWUI, PNPST_PNPSTPN, CURR_KEY
        END-EXEC
        IF SQLCODE = +1403 THEN
            EXEC SQL SELECT /*+ INDEX (HTDBDDA_PNIPT HTDBDDA_PNIPT_PNWUI) */ *
                INTO :PNIPT FROM HTDBDDA_PNIPT
                WHERE DBD_PNWUI = :PNIPT-PNWUI
                  AND PNPST_PNPSTPN > :PNIPT-PNPST-PNPSTPN
                  AND ROWNUM = 1
                ORDER BY DBD_PNWUI, PNPST_PNPSTPN
            END-EXEC
        END-IF
    END-IF.
    IF SQLCODE = 0 THEN
        MOVE PNIPT-CURR-KEY TO KT-KEY-NUM (2)
        MOVE PNIPT-PNPST-PNPSTPN (1 : 35) TO KT-KEY-CHAR (2) (1 : 35)
        MOVE PNIPT-PNWUI TO KT-KEY-CHAR (1) (1 : 19)
        MOVE PNIPT-PNITM-PNITMPN TO KT-KEY-CHAR (1) (100 : 19)
        MOVE 'Y' TO DL1-LAST-KEY-DEF
        MOVE 'Y' TO KT-KEY-CHAR (1) (200 : 1)
        MOVE PNIPT-PNWUI (1 : 19) TO DL1-IO-AREA (1 : 19)
        MOVE PNIPT-PNPST-PNPSTPN (1 : 35) TO DL1-IO-AREA (20 : 35)
    ELSE
        MOVE 'N' TO DL1-LAST-KEY-DEF
        MOVE 'N' TO KT-KEY-CHAR (1) (200 : 1)
    END-IF.
    MOVE SQLCODE TO RETCODE.
    IF SQLCODE = 0 THEN
        PERFORM READ-PNPST-PNIPT THRU READ-PNPST-PNIPT-EX
    END-IF.
PNIMP-READ-NEXT-EX.
    EXIT.

PNIMP-READ-COND.
    IF KT-KEY-CHAR (1) (200 : 1) = 'N' THEN
        PERFORM PNIMP-READ-FIRST THRU PNIMP-READ-FIRST-EX
    ELSE
        PERFORM PNIMP-READ-NEXT THRU PNIMP-READ-NEXT-EX
    END-IF.
PNIMP-READ-COND-EX.
    EXIT.

PNIMP-ISRT.
    MOVE 1000102 TO RETCODE.
PNIMP-ISRT-EX.
    EXIT.

PNIMP-DLET.
    MOVE 1000102 TO RETCODE.
PNIMP-DLET-EX.
    EXIT.

PNIMP-REPL.
    MOVE 1000102 TO RETCODE.
PNIMP-REPL-EX.
    EXIT.
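PNIMP-READ-NEXT implements a two-step keyset scan by hand: first look for a row with the same PNPST key and a greater CURR_KEY; when that returns no data (SQLCODE +1403), fall back to the first row with a greater PNPST key. The same logic in miniature, on plain sorted tuples:

```python
# The two-step keyset logic of PNIMP-READ-NEXT, modeled on tuples of
# (pnpstpn, curr_key) already sorted the way the ORDER BY clauses sort them.
ROWS = [("A", 1), ("A", 2), ("B", 1)]

def read_next(rows, pnpstpn, curr_key):
    # Step 1: same PNPST key, greater CURR_KEY (first match only, ROWNUM = 1).
    for row in rows:
        if row[0] == pnpstpn and row[1] > curr_key:
            return row
    # Step 2 (the SQLCODE = +1403 branch): first row with a greater PNPST key.
    for row in rows:
        if row[0] > pnpstpn:
            return row
    return None  # end of chain: the COBOL code leaves +1403 in RETCODE

print(read_next(ROWS, "A", 1))  # ('A', 2)
print(read_next(ROWS, "A", 2))  # ('B', 1)
print(read_next(ROWS, "B", 1))  # None
```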

Now execute the following ksh script:

#!/usr/bin/ksh
#
for i in HTDBDDA HTDBDEX HTDBDIM
do
    cd $HOME/h2r/src/dbd
    h2r_dbd $i.dbd
    sqlplus $H2RSQLID @pared.sql
    cd $HOME/h2r/dbd
    sqlplus $H2RSQLID @$H2RHOME/sql/create_dbd_list.sql
    cd $HOME/h2r/import/diction
    cpy2xml -o $i.xml $i
    xmlconverter -db -o $i.sql $i.xml
    sqlplus $H2RSQLID @$i.sql
done
xmlconverter -p -n -h2r -o HTDBDDA.cbl HTDBDDA.xml
cob -u HTDBDDA.cbl
h2r_generate_dbd HTDBDDA
h2r_prog_dbd HTDBDEX
h2r_prog_dbd HTDBDIM

This script creates the whole H2R environment for the physical DBD HTDBDDA and for both logical DBDs, HTDBDIM and HTDBDEX. The implosion logical resolution rules are situated in the manually created copies HTDBDIM_PAIRED_PNIMP.cpy and HTDBDIM_PAIRED_PNWUI.cpy. The other manually created copies are working copies and do not contain important information.

Notice that the physical DBD HTDBDDA has the following structure:

--        +-----------+
--        |   PNITM   |
--        +-----------+
--              |
--       ----------------
--       |              |
-- +...........+  +-----------+
-- .   PNIVP   .  |   PNPST   |
-- +...........+  +-----------+
--                      |
--                +***********+
--                *   PNIPT   *
--                +***********+

Implosion

-- +...........+
-- .   PNITM   .
-- +...........+
--       |
-- +...........+
-- .   PNIMP   .
-- +...........+
--       |
-- +...........+
-- .   PNWUI   .
-- +...........+

Two manually created copies define the logical resolution in this case.

The copy HTDBDIM_PAIRED_PNWUI.cpy defines the reading mode of the virtual segment PNWUI through the virtual access field. It defines only the PNWUI-READ-FIRST routine, since only one virtual segment can be defined for a given implosion key.

The copy HTDBDIM_PAIRED_PNIMP.cpy has the paragraph READ-PNPST-PNIPT, which defines the reading mode for the segment PNIMP. Logically, this segment is a union of a PNITM key and a PNPST physical segment. The program retrieves the PNIPT virtual segment after the PNWUI segment has been read.

Both copies use free space in the KT-KEY-CHAR (1) field to store critical information between calls. KT-KEY-CHAR (1) (100 : 19) is used by HTDBDIM_PAIRED_PNWUI.cpy to pass the virtual key of the PNWUI segment, while KT-KEY-CHAR (1) (200 : 1) is used by HTDBDIM_PAIRED_PNIMP.cpy to decide whether it is a first or a subsequent retrieval call.
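These reference-modification offsets act as an ad-hoc protocol on the KT-KEY-CHAR (1) buffer. Modeled with a Python bytearray (the key value below is invented; COBOL reference modification (start : length) is 1-based):

```python
# KT-KEY-CHAR (1) used as a scratch area between calls. COBOL reference
# modification (start : length) is 1-based, so (100 : 19) corresponds to
# the 0-based slice [99:118] and (200 : 1) to [199:200].
buf = bytearray(b" " * 256)

def put(start, value):
    """Store value at the 1-based COBOL offset `start`."""
    buf[start - 1:start - 1 + len(value)] = value

def get(start, length):
    """Read `length` bytes from the 1-based COBOL offset `start`."""
    return bytes(buf[start - 1:start - 1 + length])

put(100, b"PN0000000000001234X")  # (100 : 19) - virtual key of PNWUI (invented value)
put(200, b"Y")                    # (200 : 1)  - first-call-already-done flag

print(get(100, 19), get(200, 1))
```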

Explosion

-- +...........+
-- .   PNITM   .
-- +...........+
--       |
-- +...........+
-- .   PNPST   .
-- +...........+
--       |
-- +...........+
-- .   PNEXP   .
-- +...........+

The explosion logical resolution is created automatically. The procedure READ-PNITM-PNIPT is called after PNEXP-READ to obtain the data pointed to by the PNEXP segment.



FAQ

How to deal with a new PSB?

If you have added a new PSB or changed an old one, you should execute the following actions to process it:

Put the PSB source into the $HOME/h2r/src/psb directory, renaming it "psb_name.psb".

Execute the following command:

h2r_psb psb_name

See also the Internal h2r tables loading, DBD and PSB analyzing paragraph.

If the PSB is a new one, you should also add it to the psblist file under the $HOME/etc directory, if not present.

Now shut down your CICS application, if active, and execute the script

reload_psb.sh

which cancels the shared memory containing the old PSB configuration; the next CICS session will then execute a cold start, recreating the shared memory for the new PSB configuration.


How to deal with a new physical DBD?

If you have added a new DBD or changed an old one, you should execute the following actions to process it:

Put the DBD source into the $HOME/h2r/src/dbd directory, renaming it "dbd_name.dbd".

Execute the following commands:

h2r_dbd dbd_name
h2r_create_dbd_list

See also the Internal h2r tables loading, DBD and PSB analyzing paragraph.

Create or modify appropriately the corresponding COPY dbd_name.cpy in the $HOME/h2r/import/diction directory and process it as usual (see the Internal h2r tables loading, Copies analyzing paragraph).

Execute the following command:

h2r_generate_dbd dbd_name

If the DBD is a new one, the related data should be unloaded, converted, and loaded, taking into consideration the multirecord management if present (see Conversion rules definition). Create the related PSBs and process them as usual (see the How to deal with a new PSB? paragraph above).

If you have just changed an existing DBD, the necessity of reloading the data depends on the changes; in some cases it is enough to change the RDBMS configuration tables without reloading the data (for example, when an empty field is added to the end of the table corresponding to one of the segments).


How to deal with a new search field, a new segment, a new secondary index?

If you have added a new segment, a new search field, or a new secondary index for one of your DBDs, the whole DBD should be upgraded (see the How to deal with a new physical DBD? paragraph above).

