CHAPTER 13: MIGRATING TO EXADATA

    (SERVICE_NAME = SOL102)))

With these settings, when the database link connection is initiated in the target (Exadata) database, the additions to the tnsnames.ora connection string make the target Oracle database request a larger TCP buffer size for its connection. Thanks to the SDU setting in the target database's tnsnames.ora and in the source database's sqlnet.ora, the target database will negotiate the maximum possible SDU size: 32767 bytes. (Sketches of the files involved, and of the related kernel settings, appear at the end of this section.)

Note: If you have done SQL*Net performance tuning on old Oracle versions, you may remember another SQL*Net parameter, TDU (Transmission Data Unit size). This parameter is obsolete and has been ignored since Oracle Net8 (Oracle 8.0).

It is possible to ask for different sizes for the send and receive buffers, because during the data transfer the bulk of the data moves in the source-to-target direction; only some acknowledgement and "fetch more" packets travel the other way. That is why we configured a larger send buffer in the source database (listener.ora), as the source does mostly sending, and a larger receive buffer on the target side (tnsnames.ora), as the target database does mostly receiving. Note that these buffer sizes are still limited by the OS-level maximum buffer size settings: the net.core.rmem_max and net.core.wmem_max parameters in /etc/sysctl.conf on Linux, and the tcp_max_buf kernel setting on Solaris.

Parallelizing Data Load

If you choose the extract-and-load approach for your migration, there is one more bottleneck to overcome if you plan to use Exadata Hybrid Columnar Compression (EHCC). You probably want to use EHCC to save storage space and to get better data-scanning performance, since compressed data means fewer bytes to read from disk. Note that faster scanning may not make your queries significantly faster if most of the query execution time is spent in operations other than data access, such as sorting, grouping, joining, and any expensive functions called either in the SELECT list or in filter conditions. However, EHCC compression requires many more CPU cycles than the classic block-level de-duplication, because the final compression step in EHCC uses heavyweight algorithms (LZO, ZLIB, or BZIP2, depending on the compression level). Also, while decompression can happen either in the storage cells or in the database layer, compression can happen only in the database layer. So, if you load lots of data into an EHCC-compressed table using a single session, you will be bottlenecked by the single CPU you are using. Therefore, you need to parallelize the data load to take advantage of all of the database layer's CPUs and get the load done faster.

Sounds simple: we just add a PARALLEL attribute to the target table or a PARALLEL hint to the query, and we should be all set, right? Unfortunately, things are more complex than that. There are a couple of issues to solve; one of them is easy, but the other requires some effort. (A first-cut parallel load is sketched below, after the configuration examples.)
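To make the buffer and SDU discussion concrete, here is a minimal sketch of the three files involved. SDU, SEND_BUF_SIZE, RECV_BUF_SIZE, and DEFAULT_SDU_SIZE are the actual Oracle Net parameters; the alias, host name, port, and the 4 MB buffer values are illustrative assumptions, not the book's original listing (only the last line of that listing survives above).

    # tnsnames.ora on the target (Exadata) side: request the 32767-byte
    # SDU and a large receive buffer, as the target does mostly receiving
    SOL102 =
      (DESCRIPTION =
        (SDU = 32767)
        (ADDRESS =
          (PROTOCOL = TCP)
          (HOST = source-db.example.com)     # assumption
          (PORT = 1521)
          (RECV_BUF_SIZE = 4194304))         # assumed 4 MB
        (CONNECT_DATA =
          (SERVICE_NAME = SOL102)))

    # listener.ora on the source side: a large send buffer, as the
    # source does mostly sending
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (SEND_BUF_SIZE = 4194304)          # assumed 4 MB
          (ADDRESS = (PROTOCOL = TCP)(HOST = source-db.example.com)(PORT = 1521))))

    # sqlnet.ora on the source side: allow the maximum SDU to be negotiated
    DEFAULT_SDU_SIZE = 32767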
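The SQL*Net requests above are still capped by the kernel, as noted earlier. On Linux the caps live in /etc/sysctl.conf; the 4 MB values below are assumptions chosen only to match the sketch above, not recommendations.

    # /etc/sysctl.conf (Linux): raise the socket buffer ceilings so the
    # SQL*Net send/receive buffer requests are not silently capped
    net.core.rmem_max = 4194304
    net.core.wmem_max = 4194304

    # apply without a reboot:
    #   sysctl -p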
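And here is the naive first attempt that the last paragraph alludes to: a direct-path parallel insert into an EHCC table, so that compression runs on many database-layer CPUs at once. The table names, the external table holding the extracted data, and the degree of parallelism are all hypothetical, and, as the text warns, this sketch alone is not the whole story.

    -- First-cut parallel EHCC load; sales_ehcc, sales_ext, and the
    -- DOP of 16 are illustrative assumptions
    ALTER SESSION ENABLE PARALLEL DML;

    -- Target table with an EHCC compression level and a default DOP
    CREATE TABLE sales_ehcc
      COMPRESS FOR QUERY HIGH
      PARALLEL 16
    AS SELECT * FROM sales_ext WHERE 1 = 0;

    -- Direct-path parallel insert: each PX slave compresses its own slice
    INSERT /*+ APPEND PARALLEL(t, 16) */ INTO sales_ehcc t
    SELECT /*+ PARALLEL(s, 16) */ * FROM sales_ext s;

    COMMIT;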
