
IBM Advanced Technical Support - Americas

AIX Performance: Configuration & Tuning for Oracle

Vijay Adik
vadik@us.ibm.com
ATS - Oracle Solutions Team

April 17, 2009
© 2009 IBM Corporation


2
Legal information

The information in this presentation is provided by IBM on an "AS IS" basis without any warranty, guarantee or assurance of any kind. IBM also does not provide any warranty, guarantee or assurance that the information in this paper is free from errors or omissions. Information is believed to be accurate as of the date of publication. You should check with the appropriate vendor to obtain current product information.

Any proposed use of claims in this presentation outside of the United States must be reviewed by local IBM country counsel prior to such use.

IBM, eServer, RS/6000, System p, AIX, AIX 5L, GPFS, and Enterprise Storage Server (ESS) are trademarks or registered trademarks of International Business Machines Corporation.

Oracle, Oracle9i and Oracle10g are trademarks or registered trademarks of Oracle Corporation.

All other products or company names are used for identification purposes only, and may be trademarks of their respective owners.


3
Agenda

• AIX Configuration Best Practices for Oracle
  – Memory
  – CPU
  – I/O
  – Network
  – Miscellaneous


4
AIX Configuration Best Practices for Oracle

• The suggestions presented here are considered to be basic configuration "starting points" for general Oracle workloads
• Your workloads may vary
• Ongoing performance monitoring and tuning is recommended to ensure that the configuration is optimal for the particular workload characteristics


5
Performance Overview – Tuning Methodology

Iterative Tuning Process
• Stress the system (i.e., tune at peak workload)
• Monitor sub-systems
• Identify the predominant bottleneck
• Tune the bottleneck
• Repeat

(Diagram: CPU, Memory, Network and I/O sub-systems feeding into the predominant bottleneck)

Notes:
• Understand the external view of system performance. The external view is the observable event that causes someone to say the system is performing poorly: typically (1) end-user response time, (2) application (or task) response time, or (3) throughput. Do not use system metrics to judge improvement.
• Performance only improves when the predominant bottleneck is fixed. Fixing a secondary bottleneck will not improve performance and typically results in overloading an already overloaded predominant bottleneck.
• Monitor performance after a change – tuning is an iterative process. Monitoring is required after making a change for two reasons: (1) fixing the predominant bottleneck typically uncovers another bottleneck, and (2) not all changes yield positive results. If possible, have a repeatable test so that each change can be accurately evaluated.
• End-user response time is the elapsed time between when a user submits a request and receives a response.
• Application response time is the elapsed time required for one or more jobs to complete. Historically, these jobs have been called batch jobs.
• Throughput is the amount of work that can be accomplished per unit time, typically expressed in transactions per minute.


6
Performance Monitoring and Tuning Tools

CPU
  Status commands:       vmstat, topas, iostat, ps, mpstat, lparstat, sar, time/timex, emstat/alstat
  Monitor commands:      netpmon
  Trace-level commands:  tprof, curt, splat, trace, trcrpt
  Tuning tools:          schedo, fdpr, bindprocessor, bindintcpu, nice/renice, setpri

Memory
  Status commands:       vmstat, topas, ps, lsps, ipcs
  Monitor commands:      svmon, netpmon, filemon
  Trace-level commands:  trace, trcrpt
  Tuning tools:          vmo, rmss, fdpr, chps/mkps

I/O Subsystem
  Status commands:       vmstat, topas, iostat, lvmstat, lsps, lsattr/lsdev, lspv/lsvg/lslv
  Monitor commands:      fileplace, filemon
  Trace-level commands:  trace, trcrpt
  Tuning tools:          ioo, lvmo, chdev, migratepv, chlv, reorgvg

Network
  Status commands:       netstat, topas, atmstat, entstat, tokstat, fddistat, nfsstat, ifconfig
  Monitor commands:      netpmon, tcpdump
  Trace-level commands:  iptrace, ipreport, trace, trcrpt
  Tuning tools:          no, chdev, ifconfig

Processes & Threads
  Status commands:       ps, pstat, topas, emstat/alstat
  Monitor commands:      svmon, truss, kdb, dbx, gprof, fuser, prof
  Trace-level commands:  truss, pprof, curt, splat, trace, trcrpt
  Tuning tools:          nfso, chdev


7
Agenda

• AIX Configuration Best Practices for Oracle
  – Memory
  – CPU
  – I/O
  – Network
  – Miscellaneous


8
AIX Memory Management Overview

• The role of the Virtual Memory Manager (VMM) is to allow programs to address more memory locations than are actually available in physical memory.
• On AIX this is accomplished using segments that are partitioned into fixed sizes called "pages".
  – A segment is 256 MB
  – The default page size is 4 KB
  – POWER4+ and POWER5 can define large pages, which are 16 MB
• The 32-bit or 64-bit effective address translates into a 52-bit or 80-bit virtual address
  – 32-bit system: 4 bits select a segment register containing a 24-bit segment ID; the remaining 28 bits are the offset within the segment.
    • 24-bit segment ID + 28-bit offset = 52-bit virtual address
  – 64-bit system: 36 bits select a segment register containing a 52-bit segment ID; the remaining 28 bits are the offset within the segment.
    • 52-bit segment ID + 28-bit offset = 80-bit virtual address
• The VMM maintains a list of free frames that can be used to hold pages that need to be brought into memory.
  – The VMM replenishes the free list by removing some of the current pages from real memory (i.e., stealing memory).
  – The process of moving data between memory and disk is called "paging".
• The VMM uses a page replacement algorithm (implemented in the lrud kernel threads) to select the pages that will be removed from memory.


9
Virtual Memory Space – 64 Bits

(Diagram: 64-bit effective-to-virtual address translation)
• A 64-bit address is split into 36 bits that select a segment register and a 28-bit offset within the segment.
• Each segment register contains a 52-bit segment ID; 52-bit segment ID + 28-bit offset = 80-bit virtual address.
• The virtual memory space is 1 trillion terabytes (1 yottabyte); segment 0 holds the kernel segment (kernel heap, page space disk map).
• A segment is 256 MB (2^28 bytes) and is divided into 4096-byte chunks called pages, so each segment can hold a maximum of 65536 pages.
• The 28-bit offset is used to access a specific location within the segment.


10
Memory Tuning Overview

Memory tunables by area:
  Virtual Memory (general):     minfree, maxfree, lru_file_repage, lru_poll_interval
  JFS:                          maxperm, strict_maxperm
  Enhanced JFS (JFS2):          maxclient, strict_maxclient
  Large pages (pinned memory):  v_pinshm, lgpg_regions, lgpg_size

Tunables are changed with:
  vmo -p -o <tunable>=<value>
The -p flag also updates /etc/tunables/nextboot.

Example vmo output:
NAME               CUR   DEF   BOOT  MIN  MAX    UNIT          TYPE
lru_file_repage    1     1     1     0    1      boolean       D
lru_poll_interval  0     0     0     0    60000  milliseconds  D
maxclient%         80    80    80    1    100    % memory      D
maxfree            1088  1088  1088  8    200K   4KB pages     D
maxperm%           80    80    80    1    100    % memory      D
minfree            960   960   960   8    200K   4KB pages     D
minperm%           20    20    20    1    100    % memory      D
strict_maxclient   1     1     1     0    1      boolean       D
strict_maxperm     0     0     0     0    1      boolean       D


11
Virtual Memory Manager (VMM) Tuning

• The AIX "vmo" command displays and/or updates several parameters which influence the way AIX manages physical memory
  – The "-a" option displays current parameter settings
    vmo -a
  – The "-o" option changes a parameter value
    vmo -o minfree=1440
  – The "-p" option makes changes persist across a reboot
    vmo -p -o minfree=1440

On AIX 5.3, a number of the default "vmo" settings are not optimized for database workloads and should be modified for Oracle environments.


12
Kernel Parameter Tuning – AIX 6.1

• AIX 6.1 is configured by default to be 'correct' for most workloads.
• Many tunables are classified as 'Restricted':
  – Only change them if AIX Support says so
  – Restricted parameters are not displayed unless the '-F' option is used with commands like vmo, no, ioo, etc.
• When migrating from AIX 5.3 to 6.1, parameter override settings in AIX 5.3 will be carried over to the AIX 6.1 environment
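For example, a quick way to review all tunables on 6.1, including the restricted ones (a minimal sketch; exact output formatting varies by technology level):

  # List all vmo tunables on AIX 6.1, including restricted ones
  vmo -F -a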


13
General Memory Tuning

• Two primary categories of memory pages: computational and file system
• AIX will always try to utilize all of the physical memory available (subject to vmo parameter settings)
  – Memory not required to support current computational page demand will tend to be used for filesystem cache
  – Raw devices and filesystems mounted (or individual files opened) in DIO/CIO mode do not use filesystem cache

(Chart: "Memory Use 9/20/2007" – Process% vs. FScache% sampled from 08:02 to 10:14, showing file-system cache absorbing whatever memory computational pages do not need)


14
VMM Tuning

Definitions:
• LRUD = VMM page stealing process (LRU daemon) – 1 per memory pool
• numperm, numclient = used filesystem buffer cache pages, seen in 'vmstat -v'
• minperm = target minimum number of pages for filesystem buffer cache
• maxperm, maxclient = target maximum number of pages for filesystem buffer cache

Parameters:
• MINPERM% = target minimum % of real memory for filesystem buffer cache
• MAXPERM%, MAXCLIENT% = target maximum % of real memory for filesystem buffer cache
• MINFREE = target minimum number of free memory pages
• MAXFREE = target maximum number of free memory pages

When does LRUD start?
• When total free pages (in a given memory pool) < MINFREE
• When (maxclient pages - numclient) < MINFREE

When does LRUD stop?
• When total free pages > MAXFREE
• When (maxclient pages - numclient) > MAXFREE
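A quick way to compare current numperm/numclient values against these targets (a minimal sketch; the exact vmstat -v labels can differ slightly between releases):

  # File-cache percentages and their targets
  vmstat -v | grep -iE "numperm|numclient|minperm|maxperm|maxclient"
  # Free pages and the number of memory pools
  vmstat -v | grep -iE "free pages|memory pools"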


15
VMM Tuning Starting Points (AIX 5.2 ML4 or later)

LRU_FILE_REPAGE=0 (default for 5.3 and 6.1)
  Tells LRUD to page out file pages (filesystem buffer cache) rather than computational pages when numperm > minperm.

LRU_POLL_INTERVAL=10 (default for 5.3 and 6.1)
  The time period (in milliseconds) after which LRUD pauses so interrupts can be serviced. A value of "0" means no preemption.

MINPERM%=3 (default for 6.1)
MAXPERM%, MAXCLIENT%=90* (defaults for 6.1)
STRICT_MAXPERM=0* (default for 5.3 and 6.1)
STRICT_MAXCLIENT=1 (default for 5.3 and 6.1)

* In AIX 5.2 environments with large physical memory, set MAXPERM% and MAXCLIENT% = (2400 / physical memory in GB) and STRICT_MAXPERM=1
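As a minimal sketch of applying these starting points on AIX 5.3 (on 6.1 most of them are already the defaults and several become restricted tunables), the vmo calls would look roughly like:

  # Persistently apply the VMM starting points listed above (AIX 5.3)
  vmo -p -o lru_file_repage=0 -o lru_poll_interval=10
  vmo -p -o minperm%=3 -o maxperm%=90 -o maxclient%=90
  vmo -p -o strict_maxperm=0 -o strict_maxclient=1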


18
Virtual Memory Management (VMM) Thresholds

(Chart: physical memory (%) over time, showing numperm%, comp% and free% against the maxperm%, minperm%, maxfree and minfree thresholds)

• Start stealing pages when free memory falls below minfree.
• Stop stealing pages when free memory rises above maxfree.
• When numperm% > maxperm%, steal only file system pages.
• When minperm% < numperm% < maxperm%, steal file system or computational pages, depending on the repage rate.
• When numperm% < minperm%, steal both file system and computational pages.


19
VMM Page Stealing Thresholds

• minfree/maxfree values are per memory pool in AIX 5.3 and 6.1
  – Total system minfree = minfree * # of memory pools
  – Total system maxfree = maxfree * # of memory pools
  Most workloads do not require minfree/maxfree changes in AIX 5.3 or 6.1

• minfree
  AIX 5.3/6.1: minfree = max(960, 120 x # logical CPUs / # memory pools)
  AIX 5.2:     minfree = 120 x # logical CPUs
  Consider increasing it if the vmstat "fre" column frequently approaches zero or if "vmstat -s" shows significant "free frame waits"

• maxfree
  AIX 5.3/6.1: maxfree = max(1088, minfree + (max(maxpgahead, j2_maxPageReadAhead) x # logical CPUs) / # memory pools)
  AIX 5.2:     maxfree = minfree + (max(maxpgahead, j2_maxPageReadAhead) x # logical CPUs)

Example: a 6-way LPAR with SMT enabled (12 logical CPUs), maxpgahead=8 and j2_maxPageReadAhead=8:
  – AIX 5.3/6.1 with 4 memory pools: 120 x 12 / 4 = 360, so the 960 floor applies
  – AIX 5.2 formula: minfree = 120 x 12 = 1440; maxfree = 1440 + (max(8,8) x 12) = 1536
  – vmo -p -o minfree=1440 -o maxfree=1536
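The arithmetic above can be scripted. The following rough ksh sketch derives the AIX 5.3/6.1 starting points; it assumes "bindprocessor -q" lists the logical CPUs, "vmstat -v" reports the number of memory pools, and ioo reports maxpgahead/j2_maxPageReadAhead, so verify the parsing on your own system before acting on the numbers:

  #!/usr/bin/ksh
  # Compute minfree/maxfree starting points per the AIX 5.3/6.1 formulas above
  lcpu=$(bindprocessor -q | awk -F: '{print $2}' | wc -w)
  pools=$(vmstat -v | awk '/memory pools/ {print $1}')
  j2ra=$(ioo -o j2_maxPageReadAhead | awk -F= '{print $2+0}')
  pgra=$(ioo -o maxpgahead | awk -F= '{print $2+0}')

  ra=$j2ra
  [ $pgra -gt $ra ] && ra=$pgra

  minfree=$(( 120 * lcpu / pools ))
  [ $minfree -lt 960 ] && minfree=960
  maxfree=$(( minfree + ra * lcpu / pools ))
  [ $maxfree -lt 1088 ] && maxfree=1088

  echo "Suggested starting points: minfree=$minfree maxfree=$maxfree"
  # Review, then apply with: vmo -p -o minfree=$minfree -o maxfree=$maxfree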


21
Page Steal Method

• Historically, AIX maintained a single LRU list containing both computational and filesystem pages.
  – In environments with lots of computational pages that you want to keep in memory, LRUD may have to spend a lot of time scanning the LRU list to find an eligible filesystem page to steal
• AIX 6.1 introduced the ability to maintain separate LRU lists for computational vs. filesystem pages.
  – Also backported to AIX 5.3
• New page_steal_method parameter
  – Enabled (1) by default in 6.1, disabled (0) by default in 5.3
  – Requires a reboot to change
  – Recommended for Oracle DB environments (both AIX 5.3 and 6.1)
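A minimal sketch of enabling it on AIX 5.3 (it is already the default on 6.1; the -r flag stages the change for the next boot, since a reboot is required):

  # Enable separate computational/filesystem LRU lists; takes effect at next reboot
  vmo -r -o page_steal_method=1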


22
Understanding Memory Pools

• Memory cards are associated with every Multi Chip Module (MCM), Dual Core Module (DCM) or Quad Core Module (QCM) in the server
  – The Hypervisor assigns physical CPUs to a dedicated-CPU LPAR (or shared processor pool) from one or more MCMs, DCMs or QCMs
  – For a given LPAR, there will normally be at least 1 memory pool for each MCM, DCM or QCM that has contributed processors to that LPAR or shared processor pool
• By default, memory for a process is allocated from memory associated with the processor that caused the page fault.
• Memory pool configuration is influenced by the vmo parameter "memory_affinity"
  – memory_affinity=1 means configure memory pools based on the physical hardware configuration (the default)
  – memory_affinity=0 means configure roughly uniform memory pools from any physical location
• The number of pools can be seen with 'vmstat -v | grep pools'
• The pool sizes can only be seen using kdb
• LRUD operates per memory pool

(Diagram: p590 / p595 MCM architecture)
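A quick check along these lines (a sketch; on 6.1 memory_affinity is a restricted tunable, so displaying it may additionally require the -F flag):

  # How many memory pools does this LPAR have?
  vmstat -v | grep "memory pools"
  # Current memory_affinity setting, range and type
  vmo -L memory_affinity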


23
Memory Affinity…

• Not generally a benefit unless processes are bound to a particular processor
• It can exacerbate any page replacement algorithm issues (e.g. system paging or excessive LRUD scanning activity) if memory pool sizes are unbalanced
• If there are paging or LRUD related issues, try basic vmo parameter or Oracle SGA/PGA tuning first
• If issues remain, use 'kdb' to check whether memory pool sizes are unbalanced:

  KDB(1)> memp *
                   VMP  MEMP  NB_PAGES  FRAMESETS  NUMFRB
  memp_frs+010000  00   000   00B1F9F4  000 001    00B073DE
  memp_frs+010780  00   003   00001BBC  006 007    00000000
  memp_frs+010280  01   001   00221C80  002 003    0021C3CB
  memp_frs+010500  02   002   00221C80  004 005    0021CDDE
  (NB_PAGES = pages in the pool, NUMFRB = free pages)

• If the pool sizes are not balanced, consider disabling memory affinity:
  # vmo -r -o memory_affinity=0   (requires a reboot)
  – IY73792 required for 5300-01 and 5300-02
  – Code changes in 5.3 TL5/TL6 solved most memory affinity issues
  – memory_affinity is also a "Restricted" tunable in AIX 6.1


24
AIX 5.3/6.1 Multiple Page Size Support

• AIX 5.3 5300-04 introduces two new page sizes:
  – 64K
  – 16M (large pages; available since POWER4+)
  – 16G (huge pages)
    • Requires POWER5+ hardware
    • Requires p5 System Release 240, Service Level 202 microcode
    • 16GB support requires Version 5 Release 2 of the Hardware Management Console (HMC) machine code
• The user/application must request the preferred page size
  – The 64K page size is very promising, since 64K pages do not need to be configured/reserved in advance or pinned
    • export LDR_CNTRL=DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K in the oracle user's environment to use the 64K page size for data, text and stack
  – Oracle explicitly requests the page size in 10.2.0.4 and up
  – If the preferred size is not available, the largest available smaller size will be used
    • Current Oracle versions will end up using 64KB pages even if the SGA is not pinned
• Refer to: http://www-03.ibm.com/systems/resources/systems_p_os_aix_whitepapers_multiple_page.pdf
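For example (a sketch; the LDR_CNTRL string is the corrected form of the one above, and the instance must be started from a shell that has it exported):

  # List the page sizes this LPAR/kernel supports
  pagesize -a
  # Request 64K data, text and stack pages for processes started from this shell
  export LDR_CNTRL=DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K
  # ...then start the Oracle instance/listener from this environment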


25
Large Page Support (optional)

Pinning shared memory
• AIX parameters
  – vmo -p -o v_pinshm=1
  – Leave maxpin% at the default of 80% unless the SGA exceeds 77% of real memory
    • vmo -p -o maxpin%=[(SGA size * 100 / total memory) + 3]
• Oracle parameters
  – LOCK_SGA = TRUE

Enabling large page support
• vmo -r -o lgpg_size=16777216 -o lgpg_regions=(SGA size / 16 MB)

Allowing Oracle to use large pages
• chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

Monitoring tools
• svmon -G
• svmon -P

See Oracle Metalink note 372157.1.

Note: It is recommended not to pin the SGA as long as the VMM, SGA and PGA are configured properly.


26
Determining SGA Size

SGA Memory Summary for DB: test01  Instance: test01  Snaps: 1046-1047

SGA regions                    Size in Bytes
------------------------------ ----------------
Database Buffers                 16,928,210,944
Fixed Size                              768,448
Redo Buffers                          2,371,584
Variable Size                     1,241,513,984
                               ----------------
sum                              18,172,864,960

lgpg_regions = 18,172,864,960 / 16,777,216 = 1084 (rounded up)
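The same rounding can be done in the shell before running vmo (a small sketch using the SGA size from the report above; awk is used so the arithmetic is not limited to 32-bit shell integers):

  # Round the SGA size up to whole 16 MB large-page regions
  sga_bytes=18172864960
  lgpg_regions=$(awk -v b=$sga_bytes 'BEGIN { printf "%d", (b + 16777215) / 16777216 }')
  echo $lgpg_regions      # 1084
  # Then: vmo -r -o lgpg_size=16777216 -o lgpg_regions=$lgpg_regions   (reboot required)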


27
AIX Paging Space

Allocate paging space:
• Configure the server/LPAR with enough physical memory to satisfy memory requirements
• With AIX demand paging, paging space does not have to be large
• It provides a safety net to prevent system crashes when memory is overcommitted
• Generally, keep it on internal drives or high-performing SAN storage

Monitor paging activity:
• vmstat -s
• sar -r
• nmon

Resolve paging issues:
• Reduce the file system cache size (MAXPERM, MAXCLIENT)
• Reduce the Oracle SGA or PGA (9i or later) size
• Add physical memory

Do not over-commit real memory!
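A couple of quick checks for the monitoring step (a sketch; label text in vmstat -s can vary slightly by release):

  # Paging space allocation and percentage used
  lsps -a
  # Cumulative paging-space page ins/outs since boot
  vmstat -s | grep "paging space"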


28
Tuning and Improving System Performance

• Adjust the VMM tuning parameters
  – Key parameters are listed in the accompanying Word document
• Implement VMM-related mount options
  – DIO / CIO
  – Release-behind on read and/or write
• Reduce application memory requirements
• Memory model
  – %Computational < 70%: large memory model – the goal is to adjust tuning parameters to prevent paging
    • Multiple memory pools
    • Page space smaller than memory
    • Must tune the key VMM parameters
  – %Computational > 70%: small memory model – the goal is to make paging as efficient as possible
    • Add multiple page spaces on different spindles
    • Make all page spaces the same size to ensure round-robin scheduling
    • Paging space = 1.5 x computational requirements
    • Turn off DEFPS
    • Memory load control
• Add additional memory


29
Agenda

• AIX Configuration Best Practices for Oracle
  – Memory
  – CPU
  – I/O
  – Network
  – Miscellaneous


30
CPU Considerations

Oracle parameters based on the number of CPUs:
– DB_WRITER_PROCESSES
– Degree of parallelism
  • user level
  • table level
  • query level
  • PARALLEL_MAX_SERVERS or PARALLEL_AUTOMATIC_TUNING (CPU_COUNT * PARALLEL_THREADS_PER_CPU)
– CPU_COUNT
– FAST_START_PARALLEL_ROLLBACK – should be using UNDO instead
– CBO – the execution plan may be affected; check the explain plan


32
lparstat Command

# lparstat -i
Node Name                        : erpcc8
Partition Name                   : -
Partition Number                 : -
Type                             : Dedicated
Mode                             : Capped
Entitled Capacity                : 4.00
Partition Group-ID               : -
Shared Pool ID                   : -
Online Virtual CPUs              : 4
Maximum Virtual CPUs             : 4
Minimum Virtual CPUs             : 1
Online Memory                    : 8192 MB
Maximum Memory                   : 9216 MB
Minimum Memory                   : 128 MB
Variable Capacity Weight         : -
Minimum Capacity                 : 1.00
Maximum Capacity                 : 4.00
Capacity Increment               : 1.00
Maximum Physical CPUs in system  : 4
Active Physical CPUs in system   : 4
Active CPUs in Pool              : -
Unallocated Capacity             : -
Physical CPU Percentage          : 100.00%
Unallocated Weight               : -


33
CPU Considerations

• Use SMT with AIX 5.3 / POWER5 (or later) environments
• Micro-partitioning guidelines
  – Virtual CPUs = the capping limit
  – For UNCAPPED partitions:
    – Virtual CPUs should be set to the maximum peak demand requirement
    – Entitlement >= Virtual CPUs / 3
• DLPAR considerations
  Oracle 9i
  – Oracle CPU_COUNT does not recognize a change in the number of CPUs
  – The AIX scheduler can still use the added CPUs
  Oracle 10g
  – Oracle CPU_COUNT recognizes a change in the number of CPUs
  – Maximum CPU_COUNT is limited to 3x the CPU_COUNT at instance startup
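A minimal sketch for the SMT item (smtctl is the standard AIX command for this; confirm the desired mode for your workload):

  # Show the current SMT mode
  smtctl
  # Enable SMT now and make it persistent across reboots
  smtctl -m on -w boot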


35
CPU: Compatibility Matrix

(Matrix: SMT, DLPAR and Micro-Partition support for Oracle 9i, 10g and 11g on AIX 5.2, AIX 5.3 and AIX 6.1)

Note: Oracle RAC 10.2.0.3 on VIOS 1.3.1.1 & AIX 5.3 TL07 and higher are certified


36
Agenda

• AIX Configuration Best Practices for Oracle
  – Memory
  – CPU
  – I/O
  – Network
  – Miscellaneous


37
The AIX I/O Stack

Stack layers (top to bottom):
  Application (can also use raw disks or raw LVs, bypassing the filesystem layers)
  Logical file system: JFS, JFS2, NFS, other
  VMM
  LVM (LVM device drivers)
  Multi-path I/O driver (optional)
  Disk device drivers
  Adapter device drivers
  Disk subsystem (optional)
  Disk

Notes:
• The application memory area caches data to avoid I/O
• NFS caches file attributes and provides a cached filesystem for NFS clients
• JFS and JFS2 caches use extra system RAM: JFS uses persistent pages for cache, JFS2 uses client pages for cache
• Queues exist for both adapters and disks
• Adapter device drivers use DMA for I/O
• Disk subsystems have read and write cache; disks have memory to store commands/data
• I/Os can be coalesced (good) or split up (bad) as they go through the I/O stack


40
AIX Filesystem Mount Options

• Journaled File System (JFS)
  Better for lots of small file creates & deletes
  – Buffer caching (default) provides sequential read-ahead, cached writes, etc.
  – Direct I/O (DIO) mount/open option → no caching on reads
• Enhanced JFS (JFS2)
  Better for large files/filesystems
  – Buffer caching (default) provides sequential read-ahead, cached writes, etc.
  – Direct I/O (DIO) mount/open option → no caching on reads
  – Concurrent I/O (CIO) mount/open option → DIO, with write serialization disabled
    • Use for Oracle .dbf, control files and online redo logs only!
• GPFS
  Clustered filesystem – the IBM filesystem for RAC
  – Non-cached, non-blocking I/Os (similar to JFS2 CIO) for all Oracle files

GPFS and JFS2 with CIO offer performance similar to raw devices.


41
AIX Filesystem Mount Options (Cont'd)

• Direct I/O (DIO) – introduced in AIX 4.3
  – Data is transferred directly from the disk to the application buffer, bypassing the file buffer cache and hence avoiding double caching (filesystem cache + Oracle SGA).
  – Emulates a raw-device implementation.
  – To mount a filesystem in DIO:
    $ mount -o dio /data

• Concurrent I/O (CIO) – introduced with JFS2 in AIX 5.2 ML1
  – Implicit use of DIO.
  – No inode locking: multiple threads can perform reads and writes on the same file at the same time.
  – Performance achieved using CIO is comparable to raw devices.
  – To mount a filesystem in CIO:
    $ mount -o cio /data

(Chart: benchmark throughput over the run duration – higher tps indicates better performance)


43
CIO/DIO Implementation Advice

File type                   With standard mount options             With optimized mount options
--------------------------  --------------------------------------  ------------------------------------------------
Oracle bin and shared libs  mount -o rw  (cached by AIX)            mount -o rw  (cached by AIX)
Oracle datafiles            mount -o rw  (cached by AIX and Oracle) mount -o cio (1)  (cached by Oracle only)
Oracle redo logs            mount -o rw  (cached by AIX and Oracle) mount -o cio, jfs2 agblksize=512  (cached by Oracle only)
Oracle archive logs         mount -o rw  (cached by AIX)            mount -o rbrw  (uses JFS2 write-behind, but pages are not kept in AIX cache)
Oracle control files        mount -o rw  (cached by AIX)            mount -o rw  (cached by AIX)

(1) To avoid demoted I/O: jfs2 agblksize = Oracle DB block size / n


44
CIO Demotion and Filesystem Block Size

Database files (.dbf)
• If db_block_size = 2048, set agblksize=2048
• If db_block_size >= 4096, set agblksize=4096

Online redo log files & control files
• Set agblksize=512 and use CIO or DIO

Mount filesystems with the "noatime" option
• AIX/Linux records when files were created and last modified as well as last accessed. Maintaining the last-access time can cause significant I/O performance problems on frequently accessed, frequently changing files such as the contents of the /var/spool directory; noatime avoids that overhead.
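A sketch of how this might look when the filesystems are created (the volume group, mount points and sizes are hypothetical; agblksize can only be set at filesystem creation time, and noatime support depends on the AIX level):

  # JFS2 filesystem for datafiles: agblksize matched to db_block_size >= 4096
  crfs -v jfs2 -g datavg -m /oradata -A yes -a agblksize=4096 -a size=100G
  # JFS2 filesystem for online redo logs and control files: agblksize=512
  crfs -v jfs2 -g datavg -m /oraredo -A yes -a agblksize=512 -a size=20G
  # Mount with CIO
  mount -o cio /oradata
  mount -o cio /oraredo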


45
I/O Tuning (ioo)

Read-ahead (only applicable to JFS/JFS2 with caching enabled):
• minpgahead (JFS) or j2_minPageReadAhead (JFS2)
  – Default: 2
  – Starting value: max(2, DB_BLOCK_SIZE / 4096)
• maxpgahead (JFS) or j2_maxPageReadAhead (JFS2)
  – Default: 8 (JFS), 128 (JFS2)
  – Set equal to (or a multiple of) the size of the largest Oracle I/O request
    • DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT

Number of buffer structures per filesystem:
• numfsbufs
  – Default: 196, starting value: 568
• j2_nBufferPerPagerDevice (superseded by j2_dynamicBufferPreallocation)
  – Default: 512, starting value: 2048

Monitor with "vmstat -v".
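For example, hedged JFS2 starting points applied with ioo (values taken from the slide; on 6.1 some of these are restricted tunables, and buffer-count changes only take effect for filesystems mounted after the change):

  ioo -p -o j2_minPageReadAhead=2 -o j2_maxPageReadAhead=128
  ioo -p -o numfsbufs=568 -o j2_nBufferPerPagerDevice=2048
  # Watch for I/Os blocked waiting on filesystem buffers
  vmstat -v | grep -i "blocked with no fsbuf"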


46
Data Layout for Optimal I/O Performance

Stripe and mirror everything (SAME) approach:
• The goal is to balance I/O activity across all disks, loops, adapters, etc.
• Avoid/eliminate I/O hotspots
• Manual file-by-file data placement is time consuming, resource intensive and iterative

• Use RAID-5 or RAID-10 to create striped LUNs (hdisks)
• Create AIX volume group(s) (VGs) with LUNs from multiple arrays, striping on the front end as well for maximum distribution:
  – Physical partition spreading (mklv -e x), or
  – Large-grained LVM striping (>= 1 MB stripe size)

http://www-1.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100319


47
Data Layout (Cont'd)

Stripe using Logical Volume (LV) or Physical Partition (PP) striping.

• LV striping
  – Oracle recommends a stripe width that is a multiple of
    • db_block_size * db_file_multiblock_read_count
    • usually around 1 MB
  – Valid LV strip sizes:
    • AIX 5.2: 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, 1M
    • AIX 5.3: the AIX 5.2 strip sizes plus 2M, 4M, 16M, 32M, 64M, 128M
  – Use a zero offset for the AIX logical volume (9i Release 2 or later)
    • Use Scalable Volume Groups (VGs), or use "mklv -T O" with Big VGs
    • Requires AIX APAR IY36656 and an Oracle patch (bug 2620053)
• PP striping
  – Use the minimum physical partition (PP) size (mklv -t, -s parameters)
  – Spread the AIX logical volume (LV) PPs across multiple hdisks in the VG (mklv -e x)
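A rough sketch of the two approaches (the volume group, LV names, PP counts and hdisk names are hypothetical; check the PP size and hdisk membership of your VG first):

  # PP striping: spread the LV's physical partitions across all PVs in the VG
  mklv -y oradata_lv -t jfs2 -e x datavg 512

  # LV striping: 1 MB strip size across four hdisks
  mklv -y oradata2_lv -t jfs2 -S 1M -u 4 datavg 512 hdisk4 hdisk5 hdisk6 hdisk7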


48
Tuning and Improving System Performance

• Adjust the key ioo tuning parameters
• Adjust device-specific tuning parameters
• Other I/O tuning options
  – DIO / CIO
  – Release-behind on read and/or write
  – I/O pacing
  – Write-behind
• Improve the data layout
• Add additional hardware resources


50
Other I/O Stack Tuning Options (pbufs)

• Tune when vmstat -v shows increasing "pending disk I/Os blocked with no pbuf" values

lvmo:
• max_vg_pbuf_count = maximum number of pbufs that may be used for the VG
• pv_pbuf_count = the number of pbufs that are added when a PV is added to the VG

ioo:
• pv_min_pbuf = the minimum number of pbufs per PV used by LVM
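A minimal sketch of checking and raising the per-VG pbuf count (the VG name and the new value are illustrative only):

  # Are I/Os being blocked for lack of pbufs?
  vmstat -v | grep "blocked with no pbuf"
  # Current pbuf settings and statistics for a volume group
  lvmo -v datavg -a
  # Raise the per-PV pbuf count for that VG
  lvmo -v datavg -o pv_pbuf_count=1024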


52
Other I/O Stack Tuning Options (Device Level)

lsattr/chdev:
• num_cmd_elems = maximum number of outstanding I/Os for an adapter.
• queue_depth = maximum number of outstanding I/Os for an hdisk. The recommended/supported maximum is storage-subsystem dependent.
• max_xfer_size = maximum allowable I/O transfer size (default is 0x100000, i.e. 1 MB). The maximum supported value is storage-subsystem dependent. Increasing the value (to at least 0x200000) also increases the DMA size from 16 MB to 256 MB.
• dyntrk = when set to yes (recommended), allows immediate re-routing of I/O requests to an alternative path when a device ID (N_PORT ID) change has been detected.
• fc_err_recov = when set to "fast_fail" (recommended), if the driver receives an RSCN notification from the switch, it checks whether the device is still on the fabric and flushes back outstanding I/Os if the device is no longer found.
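A sketch of checking and changing these attributes (device names are examples; -P defers the change to the next boot because the devices are usually busy, and maximum values depend on the storage subsystem and adapter):

  # Fibre Channel adapter attributes
  lsattr -El fcs0 -a num_cmd_elems -a max_xfer_size
  chdev -l fcs0 -a num_cmd_elems=1024 -a max_xfer_size=0x200000 -P
  # FC protocol device attributes
  chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P
  # Per-hdisk queue depth
  lsattr -El hdisk10 -a queue_depth
  chdev -l hdisk10 -a queue_depth=32 -P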


54
Asynchronous I/O for Filesystem Environments

AIX parameters:
• minservers = minimum number of AIO server processes
  – AIX 5.3 default = 1 (system wide), 6.1 default = 3 (per CPU)
• maxservers = maximum number of AIO server processes
  – AIX 5.3 default = 10 (per CPU), 6.1 default = 30 (per CPU)
• maxreqs = maximum number of concurrent AIO requests
  – AIX 5.3 default = 4096, 6.1 default = 65536
• "Enable" AIO at system restart (always enabled on 6.1)
• Typical 5.3 settings: minservers=100, maxservers=200, maxreqs=65536
  – Raw devices or ASM environments use fastpath AIO
    > the parameters above do not apply
    > run "lsattr -El aio0" and look for the "fastpath" value, which should be enabled
  – CIO uses fastpath AIO in AIX 6.1
  – For CIO fastpath AIO in 5.3 TL5+, set fsfastpath=1
    > not persistent across a reboot

Oracle parameters:
• disk_asynch_io = TRUE
• filesystemio_options = {ASYNCH | SETALL}
• db_writer_processes (leave at the default)
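A sketch of the AIX 5.3 side of this (aio0 attribute changes generally need -P plus a reboot while the subsystem is in use; on 6.1 AIO is always available and is tuned through ioo instead):

  # AIX 5.3: make AIO available at boot and set the typical values above
  chdev -l aio0 -a autoconfig=available -P
  chdev -l aio0 -a minservers=100 -a maxservers=200 -a maxreqs=65536 -P
  lsattr -El aio0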


55
I/O: Asynchronous I/O (AIO)

• Allows multiple requests to be sent without having to wait until the disk subsystem has completed the physical I/O.
• Use of asynchronous I/O is strongly advised whatever type of filesystem and mount option is implemented (JFS, JFS2, CIO, DIO).

• POSIX vs. Legacy
  Since AIX 5L V5.3, two types of AIO are available: Legacy and POSIX. For the moment, the Oracle code uses the Legacy AIO servers.

(Diagram: the application queues requests (1, 2) onto the AIO queue; aioservers (3, 4) issue them to the disk)


56
I/O: Asynchronous I/O (AIO) Fastpath

With fastpath, I/Os are queued directly from the application into the LVM layer without any "aioserver kproc" involvement.
• Better performance compared to non-fastpath
• No need to tune the min and max aioservers
• No aioserver kprocs, so "ps -k | grep aio | wc -l" is not relevant; use "iostat -A" instead

(Diagram: application → AIX kernel → disk, steps 1-3, with no aioserver kprocs in the path)

• Raw devices / ASM:
  – Check the AIO configuration with: lsattr -El aio0
  – Enable the asynchronous I/O fastpath:
    AIX 5L:  chdev -a fastpath=enable -l aio0   (default since AIX 5.3)
    AIX 6.1: ioo -p -o aio_fastpath=1           (default setting)
• Filesystems with CIO/DIO and AIX 5.3 TL5+:
  – Activate fsfast_path (comparable to fastpath, but for filesystems with CIO/DIO):
    AIX 5L:  add the following line to /etc/inittab: aioo:2:once:aioo -o fsfast_path=1
    AIX 6.1: ioo -p -o aio_fsfastpath=1         (default setting)


57
Asynchronous I/O for Filesystem Environments (Cont'd)

Monitor Oracle usage:
• Watch the alert log and *.trc files in the BDUMP directory for the warning message "lio_listio returned EAGAIN"
  – If warning messages are found, increase maxreqs and/or maxservers

Monitor from AIX:
• pstat -a | grep aios
• Use the "-A" option of nmon
• iostat -Aq (new in AIX 5.3)


58
GPFS I/O Related Tunables

• Refer to Metalink note 302806.1

Async I/O:
• The Oracle parameter filesystemio_options is ignored
• Set the Oracle parameter disk_asynch_io=TRUE
• prefetchthreads = the number of GPFS prefetch threads
  – Usually set prefetchthreads=64 (the default)
• worker1threads = GPFS asynchronous I/O threads
  – Set worker1threads = 550 - prefetchthreads
• Set AIO maxservers = (worker1threads / #cpus) + 10

Other settings:
• The GPFS block size is configurable; most installations use 512 KB to 1 MB
• pagepool = the GPFS filesystem buffer cache; not used for RAC data files but may be for binaries. Default = 64M
  mmchconfig pagepool=100M


59
I/O Pacing

• I/O pacing parameters can be used to prevent large I/O streams from monopolizing CPUs
  – System backups (mksysb)
  – DB backups (RMAN, NetBackup)
  – Software patch updates
• When Oracle Clusterware is used, use the AIX 6.1 defaults:
  – chdev -l sys0 -a maxpout=8193 -a minpout=4096   (AIX 6.1 defaults)
  – nfso -o nfs_iopace_pages=1024                   (AIX 6.1 default)
  – On the Oracle Clusterware, set: crsctl set css diagwait 13 -force
    • This delays the OPROCD reboot time from 0.5 seconds to 10 seconds during node eviction/reboot, just enough to write the log/trace files for future diagnosis. See Metalink note 559365.1.


60
ASM Configurations

AIX parameters:
– Async I/O needs to be enabled, but default values may be used

ASM instance parameters:
– ASM_POWER_LIMIT=1
  Makes ASM rebalancing a low-priority operation. May be changed dynamically. It is common to set this value to 0, then increase it to a higher value during maintenance windows.
– PROCESSES = 25 + 15n, where n = the number of instances using ASM

DB instance parameters:
– disk_asynch_io=TRUE
– filesystemio_options=ASYNCH
– Increase PROCESSES by 16
– Increase LARGE_POOL by 600K
– Increase SHARED_POOL by [(1 MB per 100 GB of usable space) + 2 MB]


61
Agenda

• AIX Configuration Best Practices for Oracle
  – Memory
  – CPU
  – I/O
  – Network
  – Miscellaneous


62
Network Options (no) Parameters

– Set sb_max >= 1 MB (1048576)
– Set tcp_sendspace >= 262144
– Set tcp_recvspace >= 262144
– Set rfc1323 = 1

If use_isno=1, check whether the settings have been overridden at the network interface level:

$ no -a | grep isno
            use_isno = 1
$ lsattr -E -l en0 -H
attribute      value  description
rfc1323               N/A
tcp_nodelay           N/A
tcp_sendspace         N/A
tcp_recvspace         N/A
tcp_mssdflt           N/A
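A hedged example of applying these starting points system-wide (no -p persists the change on AIX 5.3 and later) and overriding them per interface when use_isno is in effect (en0 is just an example interface):

  no -p -o sb_max=1048576
  no -p -o tcp_sendspace=262144 -o tcp_recvspace=262144 -o rfc1323=1
  # If use_isno=1, interface-level values take precedence; set them there too if needed
  chdev -l en0 -a tcp_sendspace=262144 -a tcp_recvspace=262144 -a rfc1323=1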


63
Additional Network (no) Parameters for RAC

• Set udp_sendspace = db_block_size * db_file_multiblock_read_count (not less than 65536)
• Set udp_recvspace = 10 * udp_sendspace
  – Must be < sb_max
  – Increase it if buffer overflows occur
• ipqmaxlen=512 for GPFS environments
• Use jumbo frames if supported at the switch layer

Examples:
• no -a | grep udp_sendspace
• no -p -o udp_sendspace=65536
• netstat -s | grep "socket buffer overflows"


64
Agenda

• AIX Configuration Best Practices for Oracle
  – Memory
  – I/O
  – Network
  – Miscellaneous


65
Miscellaneous Parameters

• User limits (smit chuser)
  – Soft FILE size = -1 (unlimited)
  – Soft CPU time = -1 (unlimited)
  – Soft DATA segment = -1 (unlimited)
  – Soft STACK size = -1 (unlimited)
  – /etc/security/limits
• Maximum number of PROCESSES allowed per user (smit chgsys)
  – maxuproc >= 2048
• Environment variables:
  – AIXTHREAD_SCOPE=S
  – LDR_CNTRL=DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K in the oracle user's environment
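A command-line sketch equivalent to the SMIT panels above (values from the slide, applied to a hypothetical "oracle" user):

  # Raise the oracle user's soft limits to unlimited
  chuser fsize=-1 cpu=-1 data=-1 stack=-1 oracle
  # Raise the per-user process limit
  chdev -l sys0 -a maxuproc=2048
  # Verify
  lsuser -a fsize cpu data stack oracle
  lsattr -El sys0 -a maxuproc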


71
Q&A


72
Trademarks

The following are trademarks of the International Business Machines Corporation in the United States and/or other countries. For a complete list of IBM trademarks, see www.ibm.com/legal/copytrade.shtml: AS/400, DBE, e-business logo, ESCO, eServer, FICON, IBM, IBM Logo, iSeries, MVS, OS/390, pSeries, RS/6000, S/30, VM/ESA, VSE/ESA, WebSphere, xSeries, z/OS, zSeries, z/VM

The following are trademarks or registered trademarks of other companies:
Lotus, Notes, and Domino are trademarks or registered trademarks of Lotus Development Corporation.
Java and all Java-related trademarks and logos are trademarks of Sun Microsystems, Inc., in the United States and other countries.
Linux is a registered trademark of Linus Torvalds.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation.
SET and Secure Electronic Transaction are trademarks owned by SET Secure Electronic Transaction LLC.
Intel is a registered trademark of Intel Corporation.
All other products may be trademarks or registered trademarks of their respective companies.

NOTES:
Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.

IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.

All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.

This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the products or services available in your area.

All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Prices are subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

Any proposed use of claims in this presentation outside of the United States must be reviewed by local IBM country counsel prior to such use.

The information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

© 2009 IBM Corporation
