Expert Oracle Exadata

CHAPTER 15   COMPUTE NODE LAYOUT

…more than the aggregate ingest bandwidth of the database grid. The scan rate is therefore throttled. The query should indeed run faster than it does against noncompressed data, but as we can see there is more to it than simple math. The hypothetical query cannot complete in 1/10th the time. In such a case the physical scan rate for the full rack would be throttled back to roughly 45 GBps, and the query would complete in approximately 11 seconds, about 1/6th the time of the noncompressed case. This example does not take into consideration any other factor that may cause consumers to throttle producers, such as join processing and sorting. It considers only the aggregate iDB ingest bandwidth of the database grid. Other factors, such as CPU saturation in the database grid (due to heavy join and sort processing), can further throttle the flow of data over iDB. There is a direct correlation between the selectivity of a query, the host processor utilization level, and the performance improvement delivered by EHCC. I often have to remind people that the concept of compression as a performance feature is quite new, and it is often misrepresented.

Non-RAC Configuration

Compute nodes may be configured in a number of ways. If your application does not need the high-availability or scale-out features of Oracle RAC, then Exadata provides an excellent platform for delivering high performance for standalone database servers. You can manage I/O service levels between independent databases by configuring IORM; see Chapter 7 for more information about IORM. In a non-RAC configuration each compute node has its own non-clustered ASM instance that provides storage for all databases on that server. Even though your database servers may be independent of one another, they can still share Exadata storage (cell disks). This allows each database to make use of the full I/O bandwidth of the Exadata storage subsystem. Note that in this configuration Clusterware is not installed at all. Just as on any other standalone database server, multiple databases coexist quite nicely within the same disk groups. For example, let's say you have three databases on your server, called SALES, HR, and PAYROLL. All three databases can share the same disk groups for storage. To do this, all three databases would set their instance parameters as follows:

db_create_file_dest='+DATA1_DG'
db_recovery_file_dest='+RECO1_DG'

In Figure 15-1 we see all eight compute nodes in an Exadata full rack configuration running standalone databases. For example, DB1 on Node 1 uses the DATA1_DG and RECO1_DG disk groups, which are serviced by the local (nonclustered) ASM instance. Each ASM instance has its own set of ASM disk groups, which consist of grid disks from all storage cells. At the storage cell level, these independent ASM instances cannot share grid disks; each ASM instance has its own private set of grid disks.
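To make the private-grid-disk layout concrete, here is a minimal sketch of how storage for one such standalone database (the DATA1_DG/RECO1_DG pair used by DB1) might be carved out. The grid disk prefixes, the 200G size, and the redundancy level are illustrative assumptions, not prescriptions; the CellCLI commands would be run on every storage cell (typically via dcli):

CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=DATA1, SIZE=200G
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=RECO1

-- From the ASM instance on Node 1; the path 'o/*/DATA1_*' matches only the DATA1 grid disks
CREATE DISKGROUP DATA1_DG NORMAL REDUNDANCY
  DISK 'o/*/DATA1_*'
  ATTRIBUTE 'compatible.asm'          = '11.2.0.0.0',
            'compatible.rdbms'        = '11.2.0.0.0',
            'cell.smart_scan_capable' = 'TRUE';

The RECO1_DG disk group would be built the same way from the RECO1 grid disks, and each additional standalone database (DB2 through DB8 in the figure) would get its own prefix pair.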
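With the disk groups in place, pointing SALES, HR, and PAYROLL at them is just a matter of the two file-destination parameters shown above. A minimal sketch, assuming spfile-managed instances; the 500G fast recovery area size is an arbitrary assumption, db_recovery_file_dest_size must be set before the destination itself, and the leading '+' identifies an ASM disk group:

-- Run once in each database instance (SALES, HR, and PAYROLL)
ALTER SYSTEM SET db_recovery_file_dest_size = 500G        SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest      = '+RECO1_DG' SCOPE=BOTH;
ALTER SYSTEM SET db_create_file_dest        = '+DATA1_DG' SCOPE=BOTH;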
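Finally, because these databases share every cell's disks, one busy database can starve the others. The inter-database IORM plan mentioned earlier (and covered in Chapter 7) is what keeps that in check. The following is a rough sketch only: the database names and allocation percentages are made up, the trailing hyphen is CellCLI's line-continuation character, and the exact directives available depend on your cell software version. The plan would be applied on every storage cell:

CellCLI> ALTER IORMPLAN dbplan=((name=SALES,   level=1, allocation=60), -
                                (name=HR,      level=1, allocation=20), -
                                (name=PAYROLL, level=1, allocation=20), -
                                (name=other,   level=2, allocation=100))
CellCLI> LIST IORMPLAN DETAIL

The catch-all other directive covers any database not explicitly named in the plan.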
