
=== WORKSHOP ORACLE : PERFORM A VLDB MIGRATION WITH DATAPUMP ===

Introduction

In this workshop it is explained how to use datapump to migrate a database PROD from an AIX platform to a Linux platform. The size of the source database is about 6 TB; about 50% are data segments and the rest are mainly index segments. The source database PROD is a standalone database and the target database PRODNEW is a cluster database with 4 instances running on Exadata.

The datapump migration has to run as fast as possible: a migration window has been set that allows a downtime of 36 hours. In a non-VLDB migration, the best approach would simply be a full export/import, as that approach does not require a lot of DBA attention. However, when speed is an issue, the full export/import scenario has 2 drawbacks:

1. There is only one worker process to create all the indexes. The creation can still be done in parallel, as the worker uses PX processes up to the PARALLEL value, but since there is only 1 worker process, the parallelism is limited to 1 index at a time.

2. Constraints are validated during the import operation by default. It would be much faster to load the constraints with the NOVALIDATE clause (see the example after this list).
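As an illustration of that second point (the table and constraint names below are hypothetical, not part of this migration), adding a constraint with NOVALIDATE skips the check of the existing rows, which is what makes the constraint phase so much faster on a VLDB:

-- default: all existing rows are scanned and validated, which can take hours on a large table
ALTER TABLE orders ADD CONSTRAINT fk_orders_cust
  FOREIGN KEY (customer_id) REFERENCES customers (customer_id) ENABLE;

-- NOVALIDATE: existing rows are trusted, the constraint is only enforced for new DML
ALTER TABLE orders ADD CONSTRAINT fk_orders_cust
  FOREIGN KEY (customer_id) REFERENCES customers (customer_id) ENABLE NOVALIDATE;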

An analysis of the source database (dba_segments) shows that there is only 1 very large schema (PROD_OWNER). The other schemas are quite small compared to that one large schema. So the approach will be to fully import the database with an exclude of the large schema, and to deal with that schema separately in order to speed up the import operation and work around the 2 performance issues shown above. The plan for the operation will be:

• Phase 1: Perform export operation
  o Prepare source for export
  o Perform export in parallel
• Phase 2: Perform import operation
  o Prepare target for import
  o Perform 2 imports in parallel:
    - Full import operation, exclude PROD_OWNER
    - Import PROD_OWNER, exclude indexes and constraints
• Phase 3: Build indexes and constraints in parallel
• Phase 4: Perform post migration activities
  o Perform import checks
  o Restore old values on source and target



Phase 1: Export phase

1A Prepare source for export


Before starting the export, make sure the init.ora parameter parallel_max_servers is set. In this example the setting was 0, so during the migration the setting is increased. Therefore the following script was run in source database PROD:

spool source_change_parameters_before.lis
PROMPT
PROMPT script source_change_parameters_before.sql
PROMPT Script to change parameters on source database prior to migration
PROMPT Don't forget to reset after the migration
PROMPT
PROMPT
PROMPT
PROMPT Hit ENTER to continue
PAUSE
PROMPT Changing parameter parallel_max_servers from 0 to 64
ALTER SYSTEM SET parallel_max_servers = 64 SCOPE = BOTH;

Although not strictly necessary since we make a consistent export, it is advisable to run the export while the database is in restricted mode. This can be done as follows:

SQL> alter system enable restricted session;

Because enabling restricted session does not log off existing sessions in the database, you should either stop/start the database to get rid of existing sessions, or kill the existing sessions. Use the query below to detect running sessions and kill them with the generated kill statements:

SQL> select 'ALTER SYSTEM KILL SESSION ''' ||sid||','||serial# ||''';', username,
     program, status from v$session order by username;

Now we create an additional user that will be part of the export/import operation. This information will be used afterwards to check whether the export/import operation was successful.

REM create user mig_temp
create user mig_temp identified by mig_temp default tablespace users;
grant dba to mig_temp;

REM create some tables with useful information
create table mig_temp.mig_objects as select * from dba_objects;
create table mig_temp.mig_tables as select * from dba_tables;
create table mig_temp.mig_indexes as select * from dba_indexes;
create table mig_temp.mig_segments as select * from dba_segments;
create table mig_temp.mig_constraints as select constraint_name
  from dba_constraints where owner = 'PROD_OWNER';

REM export the statistics just in case.
exec dbms_stats.create_stat_table(ownname => 'MIG_TEMP', stattab => 'MIG_STATS', tblspace => 'USERS');
exec dbms_stats.EXPORT_DATABASE_STATS('MIG_STATS','MIGRATION','MIG_TEMP');



1B Starting the export

$ expdp parfile=exp_full.par


Contents of the parfile:

PARALLEL=12
FLASHBACK_TIME="SYSTIMESTAMP"
DIRECTORY=MIG_DUMP
DUMPFILE=PROD_%U.dmp
LOGFILE=exp_full.log
METRICS=Y
FILESIZE=48G
FULL=Y
ACCESS_METHOD=EXTERNAL_TABLE

Some remarks regarding the export parameter file:

• ACCESS_METHOD should not be set by default. However, in this case, the parameter setting led to a much faster export time.

• In this approach, a dump is made to an NFS file system. Alternatively, an import can be made straight from the source database into the target database. In that scenario (which is not further described in this article) a database link should be created and added with the NETWORK_LINK parameter to the parameter file (see the sketch after this list). When importing over database links, take note that the database link transfer rate is limited to about 30 MB/sec. So if you have a higher bandwidth you should split the import operation into several sessions at the schema or table level and use a number of database links to get a higher transfer rate.

• The FILESIZE parameter ensures that a large number of dumpfiles is generated, which allows the import operation to be performed with a much higher degree of parallelism.

• The parameter FLASHBACK_TIME ensures that a consistent export is made.
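
A minimal sketch of that network import alternative (the link name DP_LINK01 and the parameter values are only examples, not part of this migration): the import is started on the target database and pulls the data directly over the link, so no DUMPFILE is specified.

SQL> create database link DP_LINK01 connect to system identified by xxxxx using 'PROD';

$ impdp system parfile=dp_imp_network.par

-- contents parameter file (example values)
NETWORK_LINK=DP_LINK01
SCHEMAS=PROD_OWNER
PARALLEL=8
DIRECTORY=mig_db
LOGFILE=imp_network.log
METRICS=Y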

The export ran for 4 hours and the total dumpfile size was 2.8 TB. In Supplement 1, three SQL queries are provided to monitor the progress of the datapump export. You can also regularly issue the statements below in the dumpfile location to monitor the growth of the dumpfiles:

[dumpfile location]$ du
3048166105 .
[dumpfile location]$ du -h
2.9T .



Phase 2: Import phase

2A Prepare target for import


While the export is running you can start some actions to prepare the import operation.

Prep-target-01: Configure an NFS Mount on the Target Database Server

The Exadata database machine is connected to a Sun ZFS Backup Appliance for rman backups. This system is also used to temporarily host the dumpfiles. The connection between the Sun ZFS appliance and the Exadata database machine is a high speed InfiniBand connection (40 Gb/sec). A 10 TB NFS mount is configured to access the dumpfiles.
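
As an illustration only (the appliance host name, export path and mount options below are assumptions, not taken from this migration), such an NFS mount is typically created along these lines; check the mount options recommended for your own platform and appliance:

# run as root on each database server (example values)
$ mkdir -p /mnt/export/PRODNEW
$ mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600 \
    zfssa-host:/export/dumps/PRODNEW /mnt/export/PRODNEW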

Prep-target-02: Create a directory in target database PRODNEW

A directory is created in the target database to refer to the NFS mount as configured above.

SQL> create directory mig_db as '/mnt/export/PRODNEW';

Prep-target-03: Set database parameters on the target

In order to speed up the migration some database parameters need to be set in the target database. They will be reset after the datapump operation has completed:

SQL> ALTER SYSTEM SET parallel_servers_target = 32 SCOPE = BOTH;
SQL> ALTER SYSTEM SET db_block_checksum = FALSE SCOPE = BOTH;

• The default value for parallel_servers_target on this platform is 8 (as it is a consolidation platform), which is too low for the datapump operation. Make sure to also check that parallel_max_servers is sufficiently high. We will be running the operation in parallel with a degree of 24.

• DB_BLOCK_CHECKSUM will be changed from TYPICAL to FALSE to speed up the migration.

• Make sure PGA_AGGREGATE_TARGET is sufficiently large as well. Especially during the create index phase a large PGA will speed up the index creation process. In database PRODNEW the PGA_AGGREGATE_TARGET = 8 GB, which is sufficient. A quick way to verify these settings is shown after this list.
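
A quick way to verify the relevant settings before starting the import (standard SQL*Plus, nothing specific to this migration):

SQL> show parameter parallel_max_servers
SQL> show parameter parallel_servers_target
SQL> show parameter pga_aggregate_target
SQL> show parameter db_block_checksum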

Prep-target-04: Configure the target database for the migration

During the migration we do not want archiving or logging to slow down the operation, so perform the following:

SQL> alter database disable block change tracking;
SQL> alter database no force logging;
SQL> alter database flashback off;
$ srvctl stop database -d PRODNEW -o immediate
SQL> startup mount;
SQL> alter database noarchivelog;
SQL> shutdown immediate;
$ srvctl start database -d PRODNEW

Prep-target-05: Startup the database in restricted session mode

In order to prevent users from accessing the database during the migration, the database should be in restricted session mode. The best thing to do is to shut down the database and start it up in restricted mode.
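
A minimal sketch of that restart, assuming the srvctl syntax used elsewhere in this workshop accepts restrict as a startup option on this version; alternatively run startup restrict from SQL*Plus on each instance:

$ srvctl stop database -d PRODNEW -o immediate
$ srvctl start database -d PRODNEW -o restrict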




Alternatively, you can enable the restricted session without restarting the database. But in that case all existing database sessions will remain in the database, so you need to kill those user sessions before you proceed:

SQL> alter system enable restricted session;

SQL> select 'ALTER SYSTEM KILL SESSION ''' ||sid||','||serial# ||''';', username,
     program, status from v$session order by username;

Use the output of the statement above to kill the user sessions.

An added benefit of restricted session is that scheduled Oracle jobs, such as AWR snapshots, will be prevented from running.

2B Perform 2 import operations in parallel

$ impdp system parfile=dp_imp_full.par

-- contents parameter file
DIRECTORY=mig_db
DUMPFILE=PROD_%U.dmp
LOGFILE=imp_full.log
PARALLEL=24
FULL=Y
METRICS=Y
ACCESS_METHOD=EXTERNAL_TABLE
EXCLUDE=SCHEMA:"IN ('SYSTEM','TSMSYS','OLAPSYS','OUTLN','ANONYMOUS','APPQOSSYS','DBSNMP','DIP','ORACLE_OCM','MDDATA','LBACSYS','WMSYS','XDB','PROD_OWNER')"
EXCLUDE=TABLESPACE:"='UNDO'"

$ impdp system parfile=dp_imp_prod_owner.par

-- contents parameter file
DIRECTORY=mig_db
DUMPFILE=PROD_%U.dmp
LOGFILE=imp_prod_owner.log
PARALLEL=24
METRICS=Y
SCHEMAS=PROD_OWNER
EXCLUDE=INDEX
EXCLUDE=CONSTRAINT
ACCESS_METHOD=EXTERNAL_TABLE

Afterwards, check the logfiles for any serious errors, for example as shown below.
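
A quick way to scan the import logfiles for errors (plain shell, assuming the logfiles are in the dumpfile directory):

$ grep -i "ORA-" imp_full.log imp_prod_owner.log
$ grep -i "error" imp_full.log imp_prod_owner.log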




Phase 3: Building Indexes and Constraints

3A Generate the create scripts

Generate the required create index and add constraint scripts in the source database:

SQL> @source_generate_indexes.sql
SQL> @source_generate_scripts.sql

You can find these scripts in Supplement 2 at the end of this workshop.

Generate the required constraint script:

$ impdp system parfile=dp_extract_constraints.par

-- Generates the create constraint statements as a SQL file, using the datapump dump as source
-- Contents parfile:
DIRECTORY=mig_db
DUMPFILE=PROD_%U.dmp
LOGFILE=extract_constraints.log
PARALLEL=24
METRICS=Y
SQLFILE=target_create_constraints.sql
SCHEMAS=PROD_OWNER
INCLUDE=CONSTRAINT

After the file is generated, edit it with the vi editor. We do not want the constraints to be added with the VALIDATE clause, as that would take a very long time to complete:

:1,$ s/ENABLE;/ENABLE NOVALIDATE;/g
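
The same substitution can also be done non-interactively; a minimal sketch with sed (GNU sed assumed for the in-place -i option):

$ sed -i 's/ENABLE;/ENABLE NOVALIDATE;/g' target_create_constraints.sql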

3B Create the indexes for user PROD_OWNER

Start 4 separate Linux sessions to the 4 separate cluster nodes and run the scripts. It is best to use screen, so the scripts continue to run even if your session is disconnected (see the sketch after this list):

Node exa1db01: SQL> @target_create_index_01.sql
Node exa1db02: SQL> @target_create_index_02.sql
Node exa1db03: SQL> @target_create_index_03.sql
Node exa1db04: SQL> @target_create_index_04.sql
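
A minimal sketch of how one of those sessions could be started (the session name is arbitrary; detach with Ctrl-a d and re-attach later with screen -r):

$ screen -S idx01
$ sqlplus / as sysdba
SQL> @target_create_index_01.sql
# detach with Ctrl-a d; later: screen -r idx01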

After the indexes are created, check whether they have all been created:

set linesize 200
set pagesize 100
spool target_check_indexes.lis
PROMPT This script checks the number of indexes and reports any missing indexes
PROMPT press ENTER to continue
PAUSE
PROMPT number of indexes on source and target database:
set heading off
select 'indexes prod_owner in target database: '||count(index_name)
  from dba_indexes where owner = 'PROD_OWNER';




select 'indexes prod_owner in source database: '||count(index_name)
  from mig_temp.mig_indexes where owner = 'PROD_OWNER';
PROMPT detail info missing indexes
PROMPT note that you can ignore the LOB indexes, they are automatically created
PROMPT with different names.
set heading on
select table_name, index_name, index_type
  from mig_temp.mig_indexes
 where owner = 'PROD_OWNER'
   and index_name not in (select index_name from dba_indexes where owner = 'PROD_OWNER')
 order by index_name;
PROMPT
PROMPT **** End of Script (=: ****
spool off

Finally, run the script that resets the indexes back to their original settings (the script below was generated during the export preparation step):

SQL> @target_reset_indexes.sql

3B part 2: Run the script to add the constraints

Now the constraints need to be added to the tables. Note that the inline NOT NULL constraints are already in place, as they were part of the data import of PROD_OWNER. Now all the other constraints will be added. Run the script target_create_constraints.sql that was generated earlier on:

SQL> @target_create_constraints.sql

After the script has run, you need to run it again. The reason we run it twice: a foreign key constraint can only be created once the referenced primary key is in place, and this is not always the case in the first run.

SQL> @target_create_constraints.sql

Afterwards, check whether the number of constraints matches the source database:

SQL> select constraint_name from mig_temp.mig_constraints
     where constraint_name not in
       (select constraint_name from dba_constraints where owner = 'PROD_OWNER');



Phase 4: Post migration steps


4A Run the scripts to create directories

The directory settings were changed on the target. During the preparation phase, scripts were generated to create the directories. Run those scripts now:

SQL> @target_create_directories.sql

$ chmod +x target_create_directories.sh
$ ./target_create_directories.sh

4B Execute the sys grants

Grants done by user SYS to other users are not part of the export dumpfile, so they need to be generated on the source manually and then executed on the target.

Run on the source database (see Supplement 3 for the script):

SQL> @source_generate_sys_grants.sql

Run on the target database:

SQL> @target_sys_grants.sql

4C Recompile invalid objects

SQL> @?/rdbms/admin/utlrp

4D Check the objects on the target and compare with the source

PROMPT number of objects in source database
SQL> select owner, object_type, count(object_name) from mig_temp.mig_objects
     group by owner, object_type order by owner, object_type;

PROMPT number of objects in target database
SQL> select owner, object_type, count(object_name) from dba_objects
     group by owner, object_type order by owner, object_type;

PROMPT invalid objects in target which were valid in source database
SQL> select owner, object_name from dba_objects
     where status <> 'VALID'
       and object_name not in
         (select object_name from mig_temp.mig_objects where status <> 'VALID');

Fix any issues you encounter.

4E Restore old values on source and target database

Reverse the actions you executed on both source and target during the preparation phase for the migration (a sketch is shown below).
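
A minimal sketch of those reset statements, assuming the original values mentioned earlier in this workshop (parallel_max_servers 0 on the source, parallel_servers_target 8 and db_block_checksum TYPICAL on the target) and assuming the target normally runs with archiving, force logging, flashback and block change tracking enabled:

-- source database PROD
SQL> ALTER SYSTEM SET parallel_max_servers = 0 SCOPE = BOTH;
SQL> alter system disable restricted session;

-- target database PRODNEW
SQL> ALTER SYSTEM SET parallel_servers_target = 8 SCOPE = BOTH;
SQL> ALTER SYSTEM SET db_block_checksum = TYPICAL SCOPE = BOTH;
$ srvctl stop database -d PRODNEW -o immediate
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database force logging;
SQL> alter database flashback on;
SQL> alter database enable block change tracking;   -- may need a USING FILE clause depending on your setup
SQL> shutdown immediate;
$ srvctl start database -d PRODNEW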

4F Gather new statistics

The best thing is to gather new statistics. Proceed as follows:

SQL> alter session force parallel dml;
SQL> exec DBMS_STATS.gather_database_stats(estimate_percent => 15, degree => 64);




This procedure can run for a long time (6 hours in the database we used). If you do not want to wait, you can also import the statistics from the dumpfile, hand over the database for production, and start gathering statistics in the background. Import the source database statistics as follows (remember that they were exported during the export preparation phase earlier on):

SQL> exec dbms_stats.IMPORT_DATABASE_STATS('MIG_STATS','MIGRATION','MIG_TEMP');
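
If you hand over the database first, the statistics gathering can then be submitted as a one-off background job; a minimal sketch using DBMS_SCHEDULER (the job name MIG_GATHER_STATS is arbitrary, not part of the original procedure):

BEGIN
  DBMS_SCHEDULER.create_job(
    job_name   => 'MIG_GATHER_STATS',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN DBMS_STATS.gather_database_stats(estimate_percent => 15, degree => 64); END;',
    enabled    => TRUE);   -- no schedule: runs once as soon as possible, then auto-drops
END;
/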

==== End of migration workshop (=: ====




SUPPLEMENT 1: QUERIES TO MONITOR DATAPUMP IMPORT AND EXPORT OPERATIONS

PROMPT Query 1: View datapump worker processes
set lines 150 pages 100 numwidth 7
col program for a38
col username for a10
col spid for a7
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
       s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
  from v$session s, v$process p, dba_datapump_sessions d
 where p.addr = s.paddr and s.saddr = d.saddr;

PROMPT Query 2: View long running operations
set lines 120
col opname for a25 trunc
col username for a15 trunc
col target for a20
col sid for 999999
col serial# for 999999
col %DONE for a8
select b.sid, b.username, a.sid, b.opname, b.target,
       round(b.SOFAR*100 / b.TOTALWORK,0) || '%' as "%DONE",
       b.TIME_REMAINING, to_char(b.start_time,'YYYY/MM/DD HH24:MI:SS') START_TIME
  from V$SESSION_LONGOPS b, V$SESSION a
 where a.sid = b.sid and TIME_REMAINING > 0
 order by b.start_time;

Query 3: View wait events
Tip: start the datapump export under a specific user, so you can monitor that user's waits. In the example below user SYS is used:

PROMPT Query 3: User Waits
SELECT NVL(s.username, '(oracle)') AS username,
       s.sid,
       s.serial#,
       sw.event,
       sw.wait_time,
       sw.seconds_in_wait,
       sw.state
  FROM v$session_wait sw, v$session s
 WHERE s.sid = sw.sid and s.username = 'SYS'
 ORDER BY sw.seconds_in_wait DESC;




SUPPLEMENT 2: GENERATE IMPORT CREATE SCRIPTS

Supplement 2 - Script source_generate_indexes.sql

PROMPT
PROMPT script source_generate_indexes.sql
PROMPT Script to generate 4 PROD_OWNER index creation scripts
PROMPT . . . target_create_index_01.sql .. target_create_index_04.sql
PROMPT
PROMPT = all indexes are evenly distributed across the 4 files
PROMPT = all indexes related to 1 table are grouped together
PROMPT = top 24 largest tables are evenly distributed across the 4 files
PROMPT
PROMPT Hit ENTER to continue
PAUSE

REM Step 1: Create a temporary table:
create table rob_temp (file_no number(4), table_name varchar2(30));
insert into rob_temp(file_no, table_name)
  select 0, table_name from dba_tables where owner = 'PROD_OWNER';

REM Step 2: Assign the numbers 1 to 4 to the table_names in the temporary table
DECLARE
  teller NUMBER(4);
  CURSOR cur_tabel IS SELECT * FROM rob_temp FOR UPDATE OF file_no;
BEGIN
  teller := 1;
  FOR i IN cur_tabel
  LOOP
    if teller >= 5 then
      teller := 1;
    end if;
    UPDATE rob_temp SET file_no = teller WHERE CURRENT OF cur_tabel;
    teller := teller + 1;
  END LOOP;
END;
/

REM Step 3: Make sure the top 24 largest tables are distributed evenly across the different files
update rob_temp set file_no = 1 where table_name in
  ('LARGE1','LARGE2','LARGE3','LARGE4','LARGE5','LARGE6');
update rob_temp set file_no = 2 where table_name in
  ('LARGE7','LARGE8','LARGE9','LARGE10','LARGE11','LARGE12');
update rob_temp set file_no = 3 where table_name in
  ('LARGE13','LARGE14','LARGE15','LARGE16','LARGE17','LARGE18');
update rob_temp set file_no = 4 where table_name in
  ('LARGE19','LARGE20','LARGE21','LARGE22','LARGE23','LARGE24');

REM Add any changes based on runtime experience:
REM fill this in after some test runs to evenly distribute the indexes.




REM Step 4: spool the index script definitions to 4 separate files:
set heading off;
set echo off;
set pages 999;
set long 90000 pages 0 lines 131;
column txt format a121 word_wrapped
set linesize 1000;
set feedback off
set linesize 1000
set trimspool on
set verify off
set embedded on

spool target_create_index_01.sql
select 'spool target_create_index_01.lis' from dual;
select 'select to_char(sysdate,''DD-MON-YYYY HH24-MI-SS'') starttijd_01 from dual;' from dual;
select 'set timing on' from dual;
select 'set echo on' from dual;
select 'set feedback on' from dual;
select 'alter system set parallel_force_local=true;' from dual;
select 'alter session set nls_sort=BINARY;' from dual;
prompt
select dbms_metadata.get_ddl('INDEX',index_name,'PROD_OWNER')||' nologging parallel 24;' txt
  from dba_indexes where table_name in (select table_name from rob_temp where file_no = 1);
select 'select to_char(sysdate,''DD-MON-YYYY HH24-MI-SS'') eindtijd_01 from dual;' from dual;
select 'spool off' from dual;

spool target_create_index_02.sql
select 'spool target_create_index_02.lis' from dual;
select 'select to_char(sysdate,''DD-MON-YYYY HH24-MI-SS'') starttijd_02 from dual;' from dual;
select 'set timing on' from dual;
select 'set echo on' from dual;
select 'set feedback on' from dual;
select 'alter system set parallel_force_local=true;' from dual;
select 'alter session set nls_sort=BINARY;' from dual;
prompt
select dbms_metadata.get_ddl('INDEX',index_name,'PROD_OWNER')||' nologging parallel 24;' txt
  from dba_indexes where table_name in (select table_name from rob_temp where file_no = 2);
select 'select to_char(sysdate,''DD-MON-YYYY HH24-MI-SS'') eindtijd_02 from dual;' from dual;
select 'spool off' from dual;

spool target_create_index_03.sql
select 'spool target_create_index_03.lis' from dual;
select 'select to_char(sysdate,''DD-MON-YYYY HH24-MI-SS'') starttijd_03 from dual;' from dual;
select 'set timing on' from dual;
select 'set echo on' from dual;




select 'set feedback on' from dual;
select 'alter system set parallel_force_local=true;' from dual;
select 'alter session set nls_sort=BINARY;' from dual;
prompt
select dbms_metadata.get_ddl('INDEX',index_name,'PROD_OWNER')||' nologging parallel 24;' txt
  from dba_indexes where table_name in (select table_name from rob_temp where file_no = 3);
select 'select to_char(sysdate,''DD-MON-YYYY HH24-MI-SS'') eindtijd_03 from dual;' from dual;
select 'spool off' from dual;

spool target_create_index_04.sql
select 'spool target_create_index_04.lis' from dual;
select 'select to_char(sysdate,''DD-MON-YYYY HH24-MI-SS'') starttijd_04 from dual;' from dual;
select 'set timing on' from dual;
select 'set echo on' from dual;
select 'set feedback on' from dual;
select 'alter system set parallel_force_local=true;' from dual;
select 'alter session set nls_sort=BINARY;' from dual;
prompt
select dbms_metadata.get_ddl('INDEX',index_name,'PROD_OWNER')||' nologging parallel 24;' txt
  from dba_indexes where table_name in (select table_name from rob_temp where file_no = 4);
select 'select to_char(sysdate,''DD-MON-YYYY HH24-MI-SS'') eindtijd_04 from dual;' from dual;
select 'spool off' from dual;
spool off

drop table rob_temp;

prompt **** End of Script (=: ****




Supplement 2 - Script source_generate_scripts.sql

PROMPT script source_generate_scripts.sql
PROMPT Script to generate several scripts for later steps
PROMPT
PROMPT This script will spool 3 files:
PROMPT . . . target_create_directories.sql
PROMPT . . . target_create_directories.sh
PROMPT . . . target_reset_indexes.sql
PROMPT
PROMPT Hit ENTER to continue
PAUSE

set heading off
set feedback off
set echo off
set linesize 200

REM ***** Create script target_create_directories.sql *****
spool target_create_directories.sql
select 'spool target_create_directories.lis' from dual;
select 'set echo on' from dual;
select 'set feedback on' from dual;
select 'create or replace directory '||directory_name||' as ''/tooling/CST'||directory_path||''';'
  from dba_directories where directory_path like '%condir%' order by directory_name;
select 'PROMPT **** End of Script (=: ****' from dual;
select 'spool off' from dual;

REM grants are being preserved with the REPLACE option.

REM ***** Create script target_create_directories.sh *****
REM Generate linux script to create the directories on the platform
REM Execute on the compute nodes as user oracle
spool target_create_directories.sh
select 'dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l oracle mkdir -p /tooling/CST'||directory_path
  from dba_directories where directory_path like '%condir%' order by directory_name;

REM ***** Create script target_reset_indexes.sql *****
spool target_reset_indexes.sql
select 'spool target_reset_indexes.lis' from dual;
select 'set echo on' from dual;
select 'set feedback on' from dual;
select 'alter index PROD_OWNER.'||index_name||' logging'||';'
  from dba_indexes where owner = 'PROD_OWNER' and logging = 'YES' order by index_name;
select 'alter index PROD_OWNER.'||index_name||' noparallel'||';'
  from dba_indexes where owner = 'PROD_OWNER' order by index_name;
select 'PROMPT **** End of Script (=: ****' from dual;
select 'spool off' from dual;
spool off
exit




SUPPLEMENT 3: SCRIPT SOURCE_GENERATE_SYS_GRANTS.SQL

column k_time new_value d_time noprint
column k_sid new_value d_sid noprint
select to_char(sysdate, 'dd-mm-yyyy hh24:mi') k_time from dual;
select name k_sid from v$database;
spool target_sys_grants_&d_sid..sql
set heading off
set termout on
set feedback on
select 'spool target_sys_grants_&d_sid..lis' from dual;
select 'grant '||privilege||' on '||owner||'.'||table_name||' to '||grantee||';'
  from sys.dba_tab_privs
 where grantee not in ('SYS','SYSTEM','PUBLIC','SELECT_CATALOG_ROLE','EXP_FULL_DATABASE',
       'EXECUTE_CATALOG_ROLE','DELETE_CATALOG_ROLE','AQ_ADMINISTRATOR_ROLE','IMP_FULL_DATABASE',
       'AQ_USER_ROLE','DBA','OUTLN','SNMPAGENT','CTXSYS','OLAPSYS','LBACSYS','ORDSYS',
       'MDSYS','DMSYS','XDB','WMSYS','EXFSYS','TKPROFER','HS_ADMIN_ROLE','OEM_MONITOR',
       'WM_ADMIN_ROLE','OLAP_USER','DBSNMP','XDBADMIN','GATHER_SYSTEM_STATISTICS',
       'LOGSTDBY_ADMINISTRATOR')
   and grantor = 'SYS';
spool off;
