Expert Oracle Exadata
CHAPTER 13: MIGRATING TO EXADATA

...database import, and with the INDEXFILE parameter to extract the DDL needed to create these objects. This step is where most mistakes are made. It is a tedious process, and careful attention must be given so that nothing falls through the cracks. Fortunately, there are third-party tools that do a very good job of comparing two databases and showing you where you've missed something. Most of these tools, such as TOAD and DB Change Manager from Embarcadero, also provide a feature to synchronize the object definitions across to the new database.

If you are still thinking about using Export/Import, note that because data loading with Import does not use direct-path inserts, it incurs much higher CPU overhead from undo and redo generation and buffer cache management. You would also need to set an appropriate BUFFER parameter for array inserts (you'll want to insert thousands of rows at a time) and use COMMIT=Y (which commits after every buffer insert) so that one huge insert transaction does not fill up the undo segments.

When to Use Data Pump or Export/Import

Data Pump and Export/Import are volume-sensitive operations. That is, the time it takes to move your database is directly tied to its size and the bandwidth of your network. For OLTP applications this is downtime, so these tools are better suited to smaller OLTP databases. They are also well suited to migrating large DW databases where read-only data is separated from read-write data. Take a look at the downtime requirements of your application and run a few tests to determine whether Data Pump is a good fit.

Another benefit of Data Pump and Export/Import is that they allow you to copy all the objects in your application schemas easily, relieving you from manually copying over PL/SQL packages, views, sequence definitions, and so on.
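The BUFFER and COMMIT=Y settings discussed earlier can be sketched as a legacy imp invocation. This is a hypothetical example: the connect string, dump file name, and schema name are placeholders, and you would size the buffer to suit your row width and undo configuration.

```shell
# Sketch (assumed names): load a dump file with the legacy imp utility.
# buffer=10485760 gives a 10 MB array-insert buffer, so thousands of rows
# are inserted per array call; commit=y commits after each buffer insert,
# which keeps a single huge transaction from filling the undo segments.
imp system/manager@target \
    file=app_schema.dmp log=app_schema_imp.log \
    fromuser=APP_OWNER touser=APP_OWNER \
    buffer=10485760 commit=y
```

Note that even with these settings, conventional-path Import remains far slower than direct-path methods; the parameters only limit the undo pressure, not the redo and buffer cache overhead.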
It is not unusual to use Export/Import to migrate the small tables and all other schema objects, while the largest tables are migrated using a different method.

What to Watch Out for When Using Data Pump or Export/Import

Character-set differences between the source and target databases are supported, but if you are converting character sets, make sure the character set of the source database is a subset of the target's. If you are importing at the schema level, check to be sure you are not leaving behind any system objects, such as roles, public synonyms, or database links. Remember that HCC is supported only with Data Pump. Be sure to use the consistency parameters of Export or Data Pump to ensure that your data is exported in a read-consistent manner. And don't forget to take into account the load you are putting on the network.

The Data Pump and Export/Import methods also require some temporary disk space (on both the source and target servers) for holding the dump files. Note that using Data Pump's table data compression option requires Oracle Advanced Compression licenses for both the source and target databases (only the metadata_only compression option is included in the Enterprise Edition license).

Copying Data over a Database Link

When extracting and copying very large amounts of data (many terabytes) between databases, database links may be your best option. Unlike the Data Pump option, with database links you read your data once (from the source), transfer it immediately over the network, and write it once (into the target database). With Data Pump, Oracle has to read the data from the source, write it to a dump file, transfer that file with a file-transfer tool (or perform the copy over NFS), read the dump file on the target, and then write the data into the target database tables.
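The read-consistency advice above can be sketched as an expdp invocation. This is an illustrative example with placeholder credentials, directory object, and schema name; FLASHBACK_TIME is the Data Pump consistency parameter, making the export consistent as of the moment it begins.

```shell
# Sketch (assumed names): a read-consistent, schema-level Data Pump export.
# flashback_time=systimestamp fixes the export to a single point in time,
# the Data Pump counterpart of the legacy exp consistent=y flag.
# MIG_DIR is an assumed directory object on the database server.
expdp system/manager@source \
    schemas=APP_OWNER \
    directory=MIG_DIR dumpfile=app_schema_%U.dmp logfile=app_schema_exp.log \
    parallel=4 \
    flashback_time=systimestamp
```

The %U substitution variable lets PARALLEL workers write to separate dump files; remember that those files land on the database server, in the temporary disk space discussed above.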
In addition to all the extra disk I/O done writing and reading the dump files, you would need extra disk space to hold these dump files during the migration. Now you might say, "Hold on, Data Pump does
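The read-once, write-once path over a database link can be sketched as a direct-path insert on the target. This is a hypothetical example: the database link SRC_DB, the table name, and the parallel degree are placeholders, not names from the source text.

```shell
# Sketch (assumed names): copy one large table over a database link,
# reading the rows once over the network and writing them once into the
# target, with no intermediate dump files on either server.
sqlplus -s app_owner/password@target <<'EOF'
-- Parallel DML plus the APPEND hint gives a direct-path load,
-- which bypasses the buffer cache and generates minimal undo.
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(t, 8) */ INTO orders t
SELECT * FROM orders@SRC_DB;
COMMIT;
EOF
```

A CREATE TABLE ... AS SELECT over the same link is an equivalent choice when the target table does not yet exist; either way, the commit must follow the direct-path insert before the table can be queried in that session.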