
Performance Tuning Siebel Software on the Sun Platform

Khader Mohiuddin
Engineering Lead, Sun-Siebel Alliance
Market Development Engineering
Sun Microsystems, Inc.

June 2006


Copyright 2006 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved.

U.S. Government Rights - Commercial software. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions of the FAR and its supplements. Use is subject to license terms. This distribution may include materials developed by third parties.

Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and in other countries, exclusively licensed through X/Open Company, Ltd. X/Open is a registered trademark of X/Open Company, Ltd.

Sun, Sun Microsystems, the Sun logo, Solaris, Sun Fire, Sun Enterprise, StorEdge, Java, and "The Network Is The Computer" are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon architecture developed by Sun Microsystems, Inc.

This product is covered and controlled by U.S. Export Control laws and may be subject to the export or import laws in other countries. Nuclear, missile, chemical biological weapons or nuclear maritime end uses or end users, whether direct or indirect, are strictly prohibited. Export or reexport to countries subject to U.S. embargo or to entities identified on U.S. export exclusion lists, including, but not limited to, the denied persons and specially designated nationals lists is strictly prohibited.

DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.



Abstract

This paper discusses the performance optimization of a complete Siebel enterprise solution on the Sun platform. The article covers tuning for the Solaris™ Operating System, Siebel software, the Oracle database server, Sun StorEdge™ products, and Sun Java™ System Web Server. We also discuss unique features of the Solaris OS that reduce risk while helping to improve the performance and stability of Siebel applications. All of the techniques described here are lessons learned from a series of performance tuning studies that were conducted under the auspices of the Siebel Platform Sizing and Performance Program (PSPP).



1 Tuning for Price/Performance: Summary
2 Siebel Application Architecture Overview
3 Optimal Sun/Siebel Architecture for Benchmark Workload
3.1 Hardware and Software Used
4 Workload Description
4.1 OLTP (Siebel Web Thin Client End Users)
4.2 Batch Server Components
5 10,000 Concurrent Users: Test Results Summary
5.1 Response Times and Transaction Throughput
5.2 Server Resource Utilization
6 Siebel Scalability on the Sun Platform
7 Performance Tuning
7.1 Tuning Solaris OS for Siebel Server
7.1.1 Solaris MTmalloc Tuning for Siebel
7.1.2 Solaris Alternate Threads Library Usage
7.1.3 The Solaris Kernel and TCP/IP Tuning Parameters for Siebel Server
7.2 Tuning Siebel Server for the Solaris OS
7.2.1 Tuning Call Center, Sales/Service, and eChannel Siebel Modules
7.2.2 Workflow
7.2.3 Assignment Manager Tuning
7.2.4 EAI-MQSeries
7.2.5 EAI-HTTP Adapter
7.3 Siebel Server Scalability Limitations and Solutions
7.3.1 The Siebel MaxTasks Upper Limit Problem
7.3.2 Bloated Siebel Processes (Commonly Mistaken as Memory Leaks)
7.4 Tuning Sun Java System Web Server
7.5 Tuning the Siebel Web Server Extension (SWSE)
7.6 Tuning Siebel Standard Oracle Database and Sun Storage
7.6.1 Optimal Database Configuration
7.6.2 Properly Locating Data on the Disk for Best Performance
7.6.3 Disk Layout and Oracle Data Partitioning
7.6.4 Solaris MPSS Tuning for Oracle Server
7.6.5 Hot Table Tuning and Data Growth
7.6.6 Oracle Parameters Tuning
7.6.7 Solaris Kernel Parameters on Oracle Database Server
7.6.8 SQL Query Tuning
7.6.9 Rollback Segment Tuning
7.6.10 Database Connectivity Using Host Names Adapter
7.6.11 High I/O with Oracle Shadow Processes Connected to Siebel
7.7 Siebel Database Connection Pooling
7.8 Tuning Sun Java System Directory Server (LDAP)
8 Performance Tweaks with No Gains
9 Scripts, Tips, and Tricks for Diagnosing Siebel on the Sun Platform
9.1 To Monitor Siebel Open Session Statistics
9.2 To List the Parameter Settings for a Siebel Server
9.3 To Find Out All OMs Currently Running for a Component
9.4 To Find Out the Number of Active Servers for a Component
9.5 To Find Out the Tasks for a Component
9.6 To Set Detailed Trace Levels on the Siebel Server Components (Siebel OM)
9.7 To Find Out the Number of GUEST Logins for a Component
9.8 To Calculate the Memory Usage for an OM
9.9 To Find the Log File Associated with a Specific OM
9.10 To Produce a Stack Trace for the Current Thread of an OM
9.11 To Show System-Wide Lock Contention Issues Using lockstat
9.12 To Show the Lock Statistic of an OM Using plockstat
9.13 To "Truss" an OM
9.14 How to Trace the SQL Statements for a Particular Siebel Transaction

9.15 Changing Database Connect String via $ODBCINI and Editing the Field .ServerName
9.16 Enabling/Disabling Siebel Components
10 Appendix A: Transaction Response Times
11 Appendix B: Database Objects Growth During the Test
12 Appendix C: Oracle statspack Report
13 References



Introduction

To ensure that the most demanding global enterprise customers can meet their deployment requirements, engineers from Siebel Systems and Sun are working jointly on several engineering projects. Their common goal is to further enhance Siebel server performance on Sun's highly scalable Solaris Operating System.

This article is an effort to document and spread knowledge of tuning and optimizing the Siebel 7 eBusiness Applications Suite on the Solaris platform. All of the techniques discussed here are lessons learned from a series of performance tuning studies conducted under the auspices of the Siebel Platform Sizing and Performance Program (PSPP). The tests conducted under this program are based on real-world scenarios derived from Siebel Systems customers, and they reflect some of the most frequently used and most critical components of the Siebel eBusiness Applications Suite. This article also provides tips and best practices, based on our experience, for field staff, benchmark engineers, system administrators, and customers who are interested in achieving optimal performance and scalability with their Siebel-on-Sun installations. The following areas are addressed in this paper:

• What are the unique features of the Solaris Operating System that reduce risk while helping to improve the performance and stability of Siebel applications?
• For maximum scalability at a low cost, what is the optimal way to configure Siebel on the Solaris OS?
• How does Sun's Chip Multithreading (CMT) technology based on the UltraSPARC IV processor benefit Siebel solutions?
• How can transaction response times be improved for end users in large Siebel-on-Sun deployments?
• How can an Oracle database running on Sun StorEdge be tuned for higher performance with Siebel software?

The performance and scalability testing was conducted at Sun's Enterprise Technology Center (ETC) in Menlo Park, California, by Sun's Market Development Engineering (MDE) group with assistance from Siebel Systems. The ETC is a massive, distributed testing facility packing more computer power than many Fortune 1000 corporations. A facility of this magnitude provides the resources needed to test the limits of software on a much greater scale than most enterprises will ever require.



1 Tuning for Price/Performance: Summary

The Solaris Operating System is the cornerstone software technology that enables Sun to deliver high performance and scalability for Siebel applications, and the OS contains a number of features that enable customers to tune for optimal price/performance. Among the core features of the Solaris OS that contributed to the superior results achieved in the tests:

Solaris MTmalloc: MTmalloc, a standard feature of the Solaris OS, was enabled on the Siebel application servers. This alternate memory allocator module was built specifically for multithreaded applications such as Siebel: the MTmalloc routines provide a faster, concurrent malloc implementation. Enabling it lowered CPU consumption by 35%. Though memory consumption doubled as a side effect, the overall price/performance benefit is positive. Improvements to MTmalloc in the Solaris 10 OS reduce this space-inefficiency penalty. More details on this topic are available in Section 7.1.1.
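As a sketch of how MTmalloc can be enabled without relinking the application, the library can be preloaded in the shell that launches the Siebel server (the start command and path below are illustrative):

```shell
# Preload the multithreaded allocator so every malloc/free in the
# Siebel processes goes through MTmalloc instead of the default libc
# allocator. /usr/lib/libmtmalloc.so.1 is its standard Solaris path.
LD_PRELOAD=/usr/lib/libmtmalloc.so.1
export LD_PRELOAD

# Start the Siebel server from the same shell (command illustrative).
./start_server all
```

Applications built in-house can instead be linked directly against the library with `-lmtmalloc`.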

Siebel Process Size: For an application process running on the Solaris OS, the default setting for stack and data size is unlimited. We found that Siebel software running with the default Solaris settings suffered from bloated stack sizes and runaway processes, which compromised the scalability and stability of Siebel on the Solaris OS. Limiting the stack size to 1 MB and raising the data size limit to 4 GB increased both scalability and stability. The two adjustments let a Siebel process use its address space more efficiently, fully utilizing the 4 GB of process address space available to a 32-bit application. A significant drop in the transaction failure rate (only eight failures out of 1.2 million total transactions!) was observed as a result of these two changes. More details on this topic are available in Section 7.3.
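The two limits above can be set in the shell that starts the Siebel server, for example in the Siebel environment script; a minimal sketch (values from the text, units per the Bourne-shell `ulimit`, which takes kilobytes for both options):

```shell
# Cap each thread's stack at 1 MB (1024 KB) so bloated stacks
# cannot exhaust the 32-bit address space.
ulimit -s 1024

# Raise the data segment limit to 4 GB (4194304 KB), the full
# address space available to a 32-bit process.
ulimit -d 4194304

# Any Siebel process started from this shell inherits both limits.
```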

Solaris Alternate Threads Library: The Solaris 8 OS provides an alternate threads implementation with a one-level model (1x1) in which user-level threads are associated one-to-one with lightweight processes (LWPs). This implementation is simpler than the standard two-level (MxN) model, in which user-level threads are multiplexed over (possibly) fewer LWPs. When used on the Siebel application servers, the 1x1 model provided good performance improvements for the multithreaded Siebel applications. More details on this topic are available in Section 7.1.2.
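On the Solaris 8 OS the 1x1 library lives under /usr/lib/lwp, so selecting it is a runtime-linker configuration rather than a code change; a sketch:

```shell
# Put the alternate one-to-one threads library ahead of the default
# MxN libthread on the runtime linker's search path, then start the
# Siebel server from this same shell.
LD_LIBRARY_PATH=/usr/lib/lwp:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
```

From the Solaris 9 OS onward the 1x1 model is the default threads implementation, so no override is needed there.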

Solaris Multiple Page Size Support (MPSS): A standard feature available in the Solaris 9 OS and subsequent versions, MPSS gives applications the ability to run on the same OS with more than one page size. Use of this Solaris facility allows a larger page size to be set for an application's heap, which results in better performance due to a reduced TLB miss rate. This feature was enabled on the Oracle server and resulted in performance gains. More details on this topic are available in Section 7.6.4.
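MPSS can be enabled for an existing binary such as the Oracle server through the mpss.so.1 interposition library; a sketch (the 4 MB value is illustrative and must be one of the sizes the hardware reports):

```shell
# Show the page sizes this platform supports (e.g. 8K, 64K, 512K, 4M).
pagesize -a

# Preload the MPSS interposer and request large pages for the heap
# of whatever is started from this shell (here, the Oracle server).
LD_PRELOAD=mpss.so.1
MPSS_HEAP=4M
export LD_PRELOAD MPSS_HEAP
```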

Solaris Resource Manager: Solaris Resource Manager was used on all tiers of the setup to manage CPU resources efficiently. This resulted in fewer process migrations, which translated to higher cache hit rates.



Sun Storage: Oracle database files were laid out on a Sun StorEdge SE6320 system. I/O balancing was implemented for the Siebel workload to reduce hot spots, and zone bit recording was used on the disks to provide higher throughput for Siebel transactions. Direct I/O was enabled on certain Oracle files and on the Siebel file system. More details on this topic are available in Section 7.6.2.
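Direct I/O on UFS is controlled at mount time; a sketch, with the device and mount point purely illustrative:

```shell
# Mount the file system holding the Siebel File System (and selected
# Oracle files) with forcedirectio, bypassing the OS page cache so
# I/O goes straight to the StorEdge array.
mount -F ufs -o forcedirectio /dev/dsk/c1t0d0s6 /export/siebelfs
```

The same `forcedirectio` option can be made persistent in the mount options field of /etc/vfstab.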

Connection Pooling: Siebel's database connection pooling was used, yielding good CPU and memory savings. Twenty end users shared a single connection to the database. More details on this topic are available in Section 7.7.
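Connection pooling is switched on through Siebel server parameters; the sketch below assumes the srvrmgr utility and a Call Center object manager component named SCCObjMgr, with values sized for the 20-users-per-connection ratio described above (a component handling 2,000 user tasks would then use about 100 pooled connections; all numbers are illustrative):

```shell
# Inside a srvrmgr session: a positive MaxSharedDbConns (with its
# companion minimums) enables pooling; sessions then share database
# connections at roughly MaxTasks / MaxSharedDbConns users each.
srvrmgr> change param MaxSharedDbConns=100 for comp SCCObjMgr
srvrmgr> change param MinSharedDbConns=100 for comp SCCObjMgr
srvrmgr> change param MinTrxDbConns=100 for comp SCCObjMgr
```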

Usage of Appropriate Sun Hardware: Pilot tests were performed to characterize the performance of the web, application, and database tiers across the current Sun product line. Hardware was chosen based on best price/performance rather than pure performance. Details on this topic are available in Section 3.



2 Siebel Application Architecture Overview

Siebel server is a flexible and scalable application server platform that supports a variety of services operating on the middle tier of the Siebel N-tier architecture, including data integration, workflow, data replication, and synchronization services for mobile clients. Figure 2.1 provides a high-level view of the Siebel application suite architecture.

[Figure 2.1: High-level view of the Siebel application suite architecture]

Siebel server includes the business logic and infrastructure for running the different CRM modules, as well as connectivity interfaces to the back-end database. It consists of several multithreaded processes, commonly referred to as "Siebel Object Managers," which can be configured so that several instances run on a single Solaris machine. The Siebel 7.x server makes use of gateway components to track user sessions.

Siebel 7.x has a thin-client architecture for connected clients, enabled through the Siebel plug-in (SWSE, the Siebel Web Server Extension) running on the web server. The SWSE is the primary interface between the client and the Siebel application server. For more information on the individual Siebel components, please refer to the Siebel product documentation at www.siebel.com.

3 Optimal Sun/Siebel Architecture for Benchmark Workload

Sun offers a wide variety of products ranging from hardware and networks to software and storage systems. To obtain the best price/performance from an application, one needs to determine which Sun products are appropriate. This selection can be achieved by understanding the application's characteristics, picking Sun products suitable to those characteristics, conducting a series of tests, and then finalizing the choice of machines.

For this project, tests were done to characterize web, application, and database performance across the current Sun product line. Hardware was selected based on best price/performance rather than pure performance criteria. Figure 3.1 illustrates the hardware configuration used in the Sun ETC testing. (Note: "SF" stands for Sun Fire server.)

[Figure 3.1 depicts the test topology: SF V65x load generators; SF V440 and SF V880 web servers; Siebel application servers on SF V890, SF E2900, and SF V440 machines; the Siebel gateway and Sun Java System Directory Server on an SF V440; Siebel batch servers on SF V440 and SF V480; an SF E2900 database server; and a Sun StorEdge SE 6320 array.]

Key:
- Network traffic between load generators and web servers
- Network packets between web servers and app servers
- Point-to-point Gbic link between each app server and the database server

Figure 3.1

After collecting detailed knowledge of Siebel's performance characteristics, the hardware/network topology depicted in Figure 3.1 was created. The Siebel end-user workload (OLTP) was distributed across three nodes: Sun Fire V890, Sun Fire E2900, and Sun Fire V440 servers. Each node ran all the Siebel components under test, that is, Call Center, eService, eSales, and eChannel. Siebel server component jobs (batch tests EAI-HTTP, EAI-MQ, WorkFlow, and HTTP-adapter) were distributed across Sun Fire V440 and Sun Fire V480 servers. The Siebel gateway and the Sun Java System Directory Server (LDAP) were intentionally placed on one physical machine (the Sun Fire V440 server) because they are very low consumers of CPU and memory resources. With Sun Gigabit Ethernet connectivity, network throughput was adequate. Each of the machines had three network interface cards (NICs), which we used to isolate the main categories of network traffic:

1. End-user (load generator) to web server traffic (shown in green)
2. Web server to gateway to Siebel application server traffic (shown in black)
3. Siebel application server to database server traffic (shown in red)

The networking was done using a Cisco Catalyst 4000 router. Two VLANs were created to separate network traffic between (1) and (2), while (3) was further optimized with individual point-to-point network interfaces from each application server to the database. This separation was done to alleviate any network bottlenecks that could have occurred at any tier as a result of simulating thousands of Siebel users. The load generators were all Sun Fire V65x servers running Mercury LoadRunner software. The load was spread across three web server machines by directing different kinds of users (such as Call Center or eService users) to the three web servers.

All of the Siebel application servers belonged to a single Siebel enterprise. A single E2900 server hosted the Oracle database; this was connected to a Sun StorEdge™ SE6320 system using Fibre Channel.

3.1 Hardware and Software Used

Gateway Server/LDAP:
• 1 x Sun Fire V440
  o 1 x 1.2 GHz UltraSPARC IIIi
  o 16 GB RAM
  o Solaris 8 OS 2/04 Generic_117350-02
  o Sun Java System Directory Server LDAP 4.1 SP9

Application Servers:
• 1 x Sun Fire V890
  o 8 x 1.2 GHz UltraSPARC IV
  o 32 GB RAM
  o Solaris 8 OS 2/04 Generic_117350-02
  o Siebel 7.5.2
• 1 x Sun Fire E2900
  o 12 x 1.2 GHz UltraSPARC IV
  o 48 GB RAM
  o Solaris 8 OS 2/04 Generic_117350-02
  o Siebel 7.5.2
• 2 x Sun Fire V440
  o 4 x 1.2 GHz UltraSPARC IIIi
  o 16 GB RAM
  o Solaris 8 OS 2/04 Generic_117350-02
  o Siebel 7.5.2
• 1 x Sun Fire V480
  o 4 x 1.2 GHz UltraSPARC IIIi+
  o 16 GB RAM
  o Solaris 8 OS 2/02 Generic_108528-27
  o Siebel 7.5.2
  o IBM MQ Series 5.2 FP2

Database Server:
• 1 x Sun Fire E2900
  o 4 x 1.2 GHz UltraSPARC IV
  o 16 GB RAM
  o Solaris 9 OS 2/04 Generic_117150-05
  o Oracle 9.2.0.2 32-bit
  o Sun StorEdge SE6320 Storage Array, 4 trays (2+2), 4 x 14 x 36 GB 15K rpm FC-AL drives

Load Runner Drivers:
• 5 x Sun Fire V65x
  o 4 x 3.02 GHz Xeon
  o 3 GB RAM
  o Windows XP SP1
  o Mercury LoadRunner 7.5.1

Web Servers:
• 2 x Sun Fire V440
  o 4 x 1.2 GHz UltraSPARC IIIi
  o 16 GB RAM
  o Solaris 8 OS 2/04 Generic_117350-02
  o Sun Java System Web Server 6.0 SP2
  o Siebel 7.5.2 SWSE
• 1 x Sun Fire V880
  o 2 x 900 MHz UltraSPARC IIIi
  o 16 GB RAM
  o Solaris 8 OS 2/02 Generic_108528-13
  o Sun Java System Web Server 6.0 SP2
  o Siebel 7.5.2 SWSE

4 Workload Description

All of the tuning discussed in this document is specific to the PSPP workload as defined by Siebel Systems. The workload was based on scenarios derived from large Siebel customers to reflect some of the most frequently used and most critical components of the Siebel eBusiness Application Suite. At a high level, the workload for these tests can be divided into two categories: (1) OLTP and (2) batch server components.

4.1 OLTP (Siebel Web Thin Client End Users)

The OLTP tests simulated the real-world requirements of a large organization with 10,000 concurrent users involved in the following tasks and functions (in a mixed ratio):

• Call Center (sales and service representatives) – 7,000 concurrent users
• Partner Relationship Management (partner organizations) – eChannel, 1,000 concurrent users
• Web sales (customers) – eSales, 1,000 concurrent users
• Web service (customers) – eService, 1,000 concurrent users

The end users were simulated using LoadRunner version 7.51 SP1 from Mercury Interactive, with a think time in the range of 5 to 55 seconds (an average of 30 seconds) between user operations.

4.2 Batch Server Components

The batch component of the workload consisted of:

1. Siebel Assignment Manager
2. Siebel Workflow
3. Siebel EAI MQ Series Adapter
4. Siebel EAI-HTTP Adapter

The Siebel 7 Assignment Manager processed assignment transactions for sales opportunities based on employee positions and territories. The Siebel 7 Workflow Manager executed workflow steps based on inserted service requests. The Siebel 7 EAI MQ Series Adapter read transactions from and placed transactions into IBM MQ Series queues. The Siebel 7 EAI-HTTP Adapter executed requests between different web infrastructures.

All of the tests were conducted with both the OLTP and batch components running in conjunction for a one-hour period (steady state) within the same Siebel enterprise installation.

5 10,000 Concurrent Users: Test Results Summary

The test system demonstrated that the Siebel 7 architecture on Sun Fire servers and the Oracle 9i database easily scales to 10,000 concurrent users.

• Vertical scalability. The Siebel 7 Server showed excellent scalability within an application server.
• Horizontal scalability. The benchmark demonstrates scalability across multiple servers without degradation.
• Low network utilization. The Siebel 7 Smart Web Architecture and Smart Network Architecture managed the network efficiently, consuming only 5.5 kbps per user.
• Efficient use of the database server. Siebel 7 Smart Database Connection Pooling and Multiplexing allowed the database to service 10,000 concurrent users and the supporting Siebel 7 Server application services with 480 database connections.

The actual results of the performance and scalability tests conducted at the Sun ETC for the Siebel workload are summarized in the following sections. Chapter 7 presents specific performance tuning tips and the methodology used to achieve this level of performance.
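The connection-multiplexing figure quoted above can be put in perspective with simple arithmetic on the reported numbers (an illustrative calculation only; the user and connection counts are those from the results summary, not a general sizing rule):

```shell
#!/bin/sh
# Multiplexing ratio implied by the benchmark results:
# 10,000 concurrent users served through 480 pooled database connections.
USERS=10000
DB_CONNECTIONS=480

RATIO=$(( USERS / DB_CONNECTIONS ))
echo "~${RATIO} user sessions per database connection"   # prints ~20
```

In other words, connection pooling and multiplexing let roughly twenty user sessions share each physical database connection.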

5.1 Response Times and Transaction Throughput

OLTP workload:

Workload                          Number of   Average Operation     Business Transactions
                                  Users       Response Time (sec)   Throughput/Hour
Call Center (Sales and Service)    7,000       0.126                 34,778
Partner Relationship Management    1,000       0.303                 12,540
eSales                             1,000       0.274                  6,775
eService                           1,000       0.199                 12,870
Totals                            10,000       N/A                   66,964

Batch workload:

Workload                  Transactions Throughput/Hour
Assignment Manager         11,427
EAI - HTTP Adapter        278,352
EAI - MQ Series Adapter   181,319
Workflow Manager           53,439

Table 5.1

5.2 Server Resource Utilization

Figure 5.2

Node                                 Functional Use                                           % CPU         Memory
                                                                                              Utilization   Utilization (MB)
1 x Sun Fire E2900 (4 CPU, 16 GB)    Oracle Database Server                                    51             1,690
1 x Sun Fire V480 (4 CPU, 16 GB)     Siebel App Server – EAI-HTTP + WorkFlow                   65             2,075
1 x Sun Fire V440 (4 CPU, 16 GB)     Siebel App Server – AM + EAI MQ Series                    37             1,742
1 x Sun Fire V440 (4 CPU, 16 GB)     Siebel App Server – 1,600 end users                       91            10,666
1 x Sun Fire E2900 (12 CPU, 48 GB)   Siebel App Server – 4,800 end users                       66            38,956
1 x Sun Fire V890 (8 CPU, 32 GB)     Siebel App Server – 3,600 end users                       68            25,871
1 x Sun Fire V440 (4 CPU, 16 GB)     Sun Java System Web Server – HTTP Adapter, WorkFlow        8               126
1 x Sun Fire V880 (4 CPU, 16 GB)     Sun Java System Web Server – Application requests         49               186
1 x Sun Fire V440 (4 CPU, 16 GB)     Sun Java System Web Server – Application requests         54               225
1 x Sun Fire V440 (4 CPU, 16 GB)     Siebel Gateway Server/Sun Java System Directory Server    11                81

Table 5.2

6 Siebel Scalability on the Sun Platform

On the Sun platform, the Siebel application scales extremely well to high concurrent user counts. With Siebel's flexible distributed architecture and Sun's large server product line, an optimal Siebel-on-Sun deployment can be achieved either with several small (one- to four-CPU) machines or with a single large Sun machine (such as an E15K or E6900). The following graphs depict Siebel scalability on different Sun machine types. All the tuning that was applied to achieve these results (as documented in the next chapter of this article) can be applied on production systems.

Figure 6.1

Figure 6.2

The V440 server uses UltraSPARC IIIi chips, while the V890 and E2900 use UltraSPARC IV chips. Figure 6.2 shows the difference in scalability between these classes of Sun machines. Customers can use this data for the capacity planning and sizing of their real-world deployments. Keep in mind that the workload used for these results should be identical to the customer's real-world deployment, or appropriate adjustments need to be made to the server sizing. For more workload details, please see Section 4.

Cost per Siebel User on Sun Platform

Figure 6.3

*Note: The $/user number is based purely on hardware cost and does not include environmental factors, facilities, service, or management.

Sun servers provide the best price/performance for Siebel applications. Figure 6.3 depicts the cost of the typical Siebel user on the various models of Sun servers tested.

Price/Performance Summary Per Tier of the Sun-Siebel Deployment

• Application tier:
  o Sun Fire V440: 440 users/CPU ($17.57/user)
  o Sun Fire V890: 662 users/CPU ($23.42/user)
  o Sun Fire E2900: 606 users/CPU ($37.54/user)
• Database tier (Sun Fire E2900): 4,902 users/CPU ($5.46/user)
• Web tier (Sun Fire V440): 2,453 users/CPU ($3.15/user)
• Average response time: from 0.126 to 0.303 sec (component-specific)
• Success rate: > 99.999% (8 failures out of ~1.2 million transactions)
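As a rough illustration of how the per-CPU figures above can feed capacity planning (a back-of-the-envelope sketch only; the users-per-CPU constants are the benchmark values quoted above, and real sizing must be adjusted for workload differences as noted in Section 6):

```shell
#!/bin/sh
# Back-of-the-envelope tier sizing from the measured users-per-CPU figures.
# Constants come from the price/performance summary; adjust for your workload.
USERS=10000            # target concurrent users
APP_PER_CPU=440        # application tier (Sun Fire V440), users per CPU
DB_PER_CPU=4902        # database tier (Sun Fire E2900), users per CPU
WEB_PER_CPU=2453       # web tier (Sun Fire V440), users per CPU

# Ceiling division: (a + b - 1) / b rounds up in integer arithmetic.
ceil_div() {
    echo $(( ($1 + $2 - 1) / $2 ))
}

echo "App tier CPUs: $(ceil_div $USERS $APP_PER_CPU)"    # prints 23
echo "DB tier CPUs:  $(ceil_div $USERS $DB_PER_CPU)"     # prints 3
echo "Web tier CPUs: $(ceil_div $USERS $WEB_PER_CPU)"    # prints 5
```

The same arithmetic applies to any target user count; only the per-CPU constants change with the machine class chosen for each tier.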

7 Performance Tuning

Many people think solving a performance problem takes a mysterious talent; however, there is a particular methodology at work. Figure 7 shows the process flow for approaching a performance problem, or simply for tuning the system for best performance.

Figure 7

Use of a methodology like the process shown in Figure 7 can take the black magic out of performance tuning. Because there are several books and white papers dedicated to the subject of performance tuning methodologies, the subject is not covered in detail in this white paper. The scope of this white paper is to provide the reader with specific tuneables for the Siebel/Sun platform. The current chapter provides specific suggestions for performance and scalability tuning. These suggestions are based on lessons learned from the tests conducted at the Sun ETC and through years of collaborative engineering between Sun and Siebel.

7.1 Tuning Solaris OS for Siebel Server

7.1.1 Solaris MTmalloc Tuning for Siebel

The alternate memory allocator module, which is standard in the Solaris OS and was built specifically for multithreaded applications such as Siebel, was enabled on the Siebel application servers. The MTmalloc routines provide a faster, concurrent malloc implementation. This feature lowered CPU consumption by 35%. Though memory consumption doubled as a side effect, the overall price/performance benefit was positive. In the Solaris 10 OS, improvements have been made to MTmalloc to reduce this space-inefficiency cost.

Effect of MTmalloc on CPU Benefit and Memory Cost

Figure 7.1.1

In Figure 7.1.1, the blue curve shows the percentage reduction in CPU usage for Siebel applications running on different Sun Fire SMP machines with MTmalloc enabled. The red curve shows the corresponding increase in memory usage due to the use of MTmalloc. As is apparent from Figure 7.1.1, enabling MTmalloc on a 4-CPU machine will not be beneficial; there is a 51% increase in memory usage by Siebel while CPU utilization remains the same. Performance gains improve when MTmalloc is used on a Siebel server running on machines with 8 CPUs and upwards. On an 8-CPU machine (V890), one can expect a 28% reduction in CPU utilization when using the MTmalloc feature. Though memory usage by the Siebel application increases when MTmalloc is enabled, this is not a big disadvantage, as the standard Sun machine configuration has ample memory.

The V440 has 4 CPUs with 16 GB of memory. A typical Siebel user on the Sun platform uses about 4 MB of memory while CPU use is much higher in comparison, so the higher memory footprint is a beneficial trade-off.

How does one enable MTmalloc?

1. Edit the file $SIEBEL_ROOT/bin/siebmtshw.
2. Add the line LD_PRELOAD=/usr/lib/libmtmalloc.so.
3. Save the file and bounce the Siebel servers.
4. After Siebel restarts, verify that MTmalloc is loaded by running the following command against a Siebel process ID:

   % pldd <pid> | grep -i mtmalloc
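For illustration, the relevant portion of siebmtshw might look like the following. This is a sketch only; the surrounding contents of the wrapper script vary by Siebel version, and only the LD_PRELOAD line and its export are what the procedure above adds:

```shell
#!/bin/sh
# Excerpt sketch of $SIEBEL_ROOT/bin/siebmtshw with MTmalloc preloading added.
# Preload the multithreaded malloc library before the Siebel binary starts,
# so all malloc/free calls in the process resolve to libmtmalloc.
LD_PRELOAD=/usr/lib/libmtmalloc.so
export LD_PRELOAD

# ... original siebmtshw contents follow (launch of the Siebel server binary) ...
```

Because LD_PRELOAD is set inside the wrapper script, it affects only the Siebel server processes and not every process owned by the Siebel user.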

7.1.2 Solaris Alternate Threads Library Usage

The Solaris 8 OS provides an alternate threads implementation with a one-level model (1x1): user-level threads are associated one-to-one with lightweight processes (LWPs). This implementation is simpler than the standard (MxN) two-level model, in which user-level threads are multiplexed over possibly fewer lightweight processes. The 1x1 model was used on the Siebel application servers and provided good performance improvements for the Siebel multithreaded applications. In Siebel 7.5.2 the alternate threads library is enabled by default; if you are using a version of Siebel older than 7.5.2, it is not enabled by default and must be enabled manually.

Procedure to enable the alternate threads library:

1. Update the environment for the UNIX® user that runs the Siebel server. If you are using the Korn shell, open the .profile for the Siebel owner and add this to the bottom of the file:
   export LD_LIBRARY_PATH=/usr/lib/lwp:$LD_LIBRARY_PATH
2. Stop Siebel.
3. Exit from the shell and log in again so the change in .profile takes effect.
4. Start Siebel.

To verify that the alternate threads library is indeed being used:

pldd <pid> | grep -i lwp

The preceding command should return the current Solaris library path, which is /usr/lib/lwp/.
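Putting the procedure together, the .profile addition can be sketched as follows (an illustrative fragment; the /usr/lib/lwp path matches the Solaris 8 layout described above):

```shell
# Fragment appended to the Siebel owner's ~/.profile (Korn shell).
# Put the alternate (1x1) threads library first on the library search path
# so Siebel processes bind to /usr/lib/lwp instead of the default MxN library.
LD_LIBRARY_PATH=/usr/lib/lwp:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
```

Because .profile is read only at login, the Siebel owner must log out and back in (steps 2-4 above) before restarting the Siebel servers.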

7.1.3 The Solaris Kernel and TCP/IP Tuning Parameters for Siebel Server

Parameter                         Scope          Default Value   Tuned Value
shmsys:shminfo_shmmax             /etc/system    -               0xffffffffffffffff
shmsys:shminfo_shmmin             /etc/system    -               100
shmsys:shminfo_shmseg             /etc/system    -               200
semsys:seminfo_semmns             /etc/system    -               12092
semsys:seminfo_semmsl             /etc/system    -               512
semsys:seminfo_semmni             /etc/system    -               4096
semsys:seminfo_semmap             /etc/system    -               4096
semsys:seminfo_semmnu             /etc/system    -               4096
semsys:seminfo_semopm             /etc/system    -               100
semsys:seminfo_semume             /etc/system    -               2048
msgsys:msginfo_msgmni             /etc/system    -               2048
msgsys:msginfo_msgtql             /etc/system    -               2048
msgsys:msginfo_msgssz             /etc/system    -               64
msgsys:msginfo_msgseg             /etc/system    -               32767
msgsys:msginfo_msgmax             /etc/system    -               16384
msgsys:msginfo_msgmnb             /etc/system    -               16384
ip:dohwcksum (for Resonate gbic)  /etc/system    -               0
rlim_fd_max                       /etc/system    1024            16384
rlim_fd_cur                       /etc/system    64              16384
sq_max_size                       /etc/system    2               0
tcp_time_wait_interval            ndd /dev/tcp   240000          60000
tcp_conn_req_max_q                ndd /dev/tcp   128             1024
tcp_conn_req_max_q0               ndd /dev/tcp   1024            4096
tcp_ip_abort_interval             ndd /dev/tcp   480000          60000
tcp_keepalive_interval            ndd /dev/tcp   7200000         900000
tcp_rexmit_interval_initial       ndd /dev/tcp   3000            3000
tcp_rexmit_interval_max           ndd /dev/tcp   240000          10000
tcp_rexmit_interval_min           ndd /dev/tcp   200             3000
tcp_smallest_anon_port            ndd /dev/tcp   32768           1024
tcp_slow_start_initial            ndd /dev/tcp   1               2
tcp_xmit_hiwat                    ndd /dev/tcp   8129            32768
tcp_fin_wait_2_flush_interval     ndd /dev/tcp   67500           675000
tcp_recv_hiwat                    ndd /dev/tcp   8129            32768

Table 7.1.3
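Because the ndd-scoped settings in Table 7.1.3 do not survive a reboot, sites typically reapply them from a boot-time script. The following sketch covers the ndd rows of the table; emit_ndd is a hypothetical helper that prints each command instead of executing it, so the list can be reviewed first (or piped to sh on a Solaris system).

```shell
#!/bin/sh
# Sketch: generate the ndd commands for the TCP tunables from Table 7.1.3.
# emit_ndd is a hypothetical helper; it prints each command rather than
# running it, so the output can be reviewed or piped to sh on Solaris.
emit_ndd() {
    echo "ndd -set /dev/tcp $1 $2"
}

emit_ndd tcp_time_wait_interval        60000
emit_ndd tcp_conn_req_max_q            1024
emit_ndd tcp_conn_req_max_q0           4096
emit_ndd tcp_ip_abort_interval         60000
emit_ndd tcp_keepalive_interval        900000
emit_ndd tcp_rexmit_interval_max       10000
emit_ndd tcp_rexmit_interval_min       3000
emit_ndd tcp_smallest_anon_port        1024
emit_ndd tcp_slow_start_initial        2
emit_ndd tcp_xmit_hiwat                32768
emit_ndd tcp_fin_wait_2_flush_interval 675000
emit_ndd tcp_recv_hiwat                32768
```

Review the generated list against Table 7.1.3 before applying it on a live system.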

7.2 Tuning Siebel Server for the Solaris OS

The key factor in tuning Siebel server performance is the number of threads, or users, per Siebel Object Manager (OM) process. The Siebel server architecture consists of multithreaded server processes servicing different business needs. Currently, Siebel is designed such that one thread of a Siebel OM services one user session or task. The ratio of threads (users) per process is configured using the Siebel parameters:

• MinMTServers
• MaxMTServers
• MaxTasks

From several tests conducted, it was found that on the Solaris platform with Siebel 7.5 the following users/OM ratios provided optimal performance:

• Call Center – 80 users/OM
• eChannel – 40 users/OM
• eSales – 50 users/OM
• eService – 60 users/OM

As you can see, the optimal ratio of threads per process varies with the Siebel OM and the type of Siebel workload per user. MaxTasks divided by MaxMTServers determines the number of users per process. For example, for 300 users the setting would be MinMTServers=6, MaxMTServers=6, MaxTasks=300. This directs Siebel to distribute the users evenly across the 6 processes, with 50 users on each. The notion of anonymous users must also be considered in this calculation, as discussed in the Call Center section, the eChannel section, and so on. The prstat -v or top command shows how many threads (users) are being serviced by a single multithreaded Siebel process.

  PID USERNAME  SIZE   RSS STATE  PRI NICE     TIME  CPU PROCESS/NLWP
 1880 pspp      504M  298M cpu14   28    0  0:00.00  10% siebmtshmw/69
 1868 pspp      461M  125M sleep   58    0  0:00.00 2.5% siebmtshmw/61
 1227 pspp      687M  516M cpu3    22    0  0:00.03 1.6% siebmtshmw/62
 1751 pspp      630M  447M sleep   59    0  0:00.01 1.5% siebmtshmw/59
 1789 pspp      594M  410M sleep   38    0  0:00.02 1.4% siebmtshmw/60
 1246 pspp      681M  509M cpu20   38    0  0:00.03 1.2% siebmtshmw/62

A thread count of more than 50 threads per process reflects the fact that the count also includes some administrative threads. If the MaxTasks/MTServers ratio is greater than 100, performance degrades in the form of longer transaction response times. The optimal users-per-process setting depends on the workload, that is, on how busy each user is.
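The MaxTasks / MaxMTServers arithmetic above (300 users spread across 6 OM processes) can be sketched in a few lines of shell:

```shell
#!/bin/sh
# Sketch: derive the users-per-process ratio from the Siebel parameters,
# using the 300-user example from the text (MaxTasks=300, MaxMTServers=6).
MAXTASKS=300
MAXMTSERVERS=6
USERS_PER_OM=$((MAXTASKS / MAXMTSERVERS))
echo "Each siebmtshmw process services $USERS_PER_OM user sessions"
```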

7.2.1 Tuning Call Center, Sales/Service, and eChannel Siebel Modules

Call Center (sccobjmgr) specific information is presented here. Eighty users per process was found to be the optimal ratio.

In the file $SIEBEL_ROOT/bin/enu/uagent.cfg:

● Set EnableCDA = FALSE to disable invoking CDA functionality.
● Set CommEnable = FALSE for Call Center, to disable downloading the CTI bar.
● Set CommConfigManager = FALSE.

Note that the preceding settings must remain TRUE in scenarios where these functions are required. The benchmark setup did not need them, so they were disabled.

To decide on the MaxTasks, MTServers, and AnonUserPool settings, use the following example:

Target No. of Users           4000
AnonUserPool                   400
Buffer                         200
MaxTasks                      4600
MaxMTServer=MinMTServer         58

Table 7.2

If the target number of users is 4000, then AnonUserPool is 10% of 4000, or 400. Allow a 5% buffer (200), add them all together (4000+400+200=4600), and MaxTasks would be 4600. Since we want to run 80 users per process, the MaxMTServer value will be 4600/80=57.5, rounded up to 58.
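The sizing rule behind Table 7.2 can be sketched as a small script. The 10% anonymous-user pool and 5% buffer come from the text; the round-up on the server count is an assumption chosen to match the 58 shown above:

```shell
#!/bin/sh
# Sketch of the Table 7.2 sizing rule for a Call Center deployment.
TARGET_USERS=4000
USERS_PER_OM=80

ANON_USER_POOL=$((TARGET_USERS * 10 / 100))           # 10% of target
BUFFER=$((TARGET_USERS * 5 / 100))                    # 5% of target
MAXTASKS=$((TARGET_USERS + ANON_USER_POOL + BUFFER))
# Round the server count up so every configured task has a thread.
MAXMTSERVERS=$(((MAXTASKS + USERS_PER_OM - 1) / USERS_PER_OM))

echo "AnonUserPool=$ANON_USER_POOL Buffer=$BUFFER"
echo "MaxTasks=$MAXTASKS MaxMTServers=$MAXMTSERVERS"
```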

The AnonUserPool value is set in the eapps.cfg file on the Siebel Web Server Engine. If the load is distributed across multiple web server machines or instances, simply divide this number among them. In this example, if two web servers were being used, set AnonUserPool to 200 in each web server's eapps.cfg file. Here is a snapshot of the file and the setting for Call Center:

[/callcenter]
AnonSessionTimeout = 360
GuestSessionTimeout = 60
SessionTimeout = 300
AnonUserPool = 200
ConnectString = siebel.tcpip.none.none://19.1.1.18:2320/siebel/SCCObjMgr

Set the following environment variables in the file $SIEBEL_HOME/siebsrvr/siebenv.csh or siebenv.sh:

export SIEBEL_ASSERT_MODE=0
export SIEBEL_OSD_NLATCH=<7 * MaxTasks + 1000>
export SIEBEL_OSD_LATCH=<1.2 * MaxTasks>

Setting the environment variable SIEBEL_ASSERT_MODE=0 turns asserts off. The OSD latch values must be set higher as the number of users grows; compute them from MaxTasks using the formulas shown above:

SIEBEL_OSD_NLATCH = 7 * MaxTasks + 1000
SIEBEL_OSD_LATCH = 1.2 * MaxTasks

Restart the Siebel server after setting these environment variables.
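Since the two latch variables must be recomputed whenever MaxTasks changes, it may help to script the formulas. This sketch uses the 4600-task example from the sizing section and integer arithmetic (12/10) for the 1.2 factor, since sh has no floating point:

```shell
#!/bin/sh
# Sketch: compute the OSD latch variables from MaxTasks using the
# formulas in the text (NLATCH = 7*MaxTasks + 1000, LATCH = 1.2*MaxTasks).
MAXTASKS=4600
SIEBEL_OSD_NLATCH=$((7 * MAXTASKS + 1000))
SIEBEL_OSD_LATCH=$((MAXTASKS * 12 / 10))   # 1.2 * MaxTasks, integer form

echo "export SIEBEL_OSD_NLATCH=$SIEBEL_OSD_NLATCH"   # 33200
echo "export SIEBEL_OSD_LATCH=$SIEBEL_OSD_LATCH"     # 5520
```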

eChannel

The same tuning information as Call Center applies for eChannel, except for two parameters: the users-per-process ratio giving the best performance for this workload was 40, and the optimal AnonUserPool setting found was 30%.

eService

The same tuning information as Call Center applies for eService, except for two parameters: the users-per-process ratio giving the best performance for this workload was 60, and the optimal AnonUserPool setting found was 30%.

eSales

Fifty users per Siebel Object Manager process provided better response times and throughput. The AnonUserPool setting is 20%.

7.2.2 Workflow

The Siebel 7 Workflow Manager executed workflow steps based on inserted service requests. Siebel Workflow can run in two modes, asynchronous or synchronous; remote asynchronous mode is discussed here. The workflow test consists of 500 Call Center end users working on 20,000 service requests.

Connection pooling of 20:1 was implemented, yielding a 10% benefit. Modify uagent.cfg and eai.cfg under $SIEBEL_HOME/bin of the Batch application server to have the following settings:

● Change EnableCDA=FALSE.
● Under [SWE] add:

EnableShuttle=TRUE
EnableReportsFromToolbar=TRUE
EnableSIDataLossWarning=TRUE
EnablePopupInlineQuery=TRUE



7.2.3 Assignment Manager Tuning

The Siebel 7 Assignment Manager (AM) processed assignment transactions for sales opportunities based on employee positions and territories. AM was run in batch assignment mode.

For this component the main tunable is the Requests parameter.

Tunable parameter: Requests
Default value: 5000
Value used in benchmark: 20

Description

Maximum number of requests read per iteration. This controls the maximum number of requests WorkMon reads from the requests queue within one iteration.

It did not help to change deletesize to 400 (from the default of 500) or to add indexes to the table S_ESCL_REQ.

After reducing the number of requests per iteration from the default of 5000 to 20 for the WorkMon component, CPU utilization was reduced from 72% to 53% on the Siebel server node where the AM and EAI-MQ tests were run together.

Throughput (txns/sec)   Value for the Requests parameter   Average CPU utilization
78,511                  5000                               72.3
4098                    10                                 52.89
8146                    20                                 53.31

Table 7.2.3



The procedure to list and change the Requests parameter for the Siebel server follows:

1. Connect to the server manager at the application server level.

2. srvrmgr:siebapp2> list param Requests for comp WorkMon

   PA_ALIAS  PA_VALUE  PA_DATATYPE  PA_SCOPE   ..  PA_NAME
   --------  --------  -----------  ---------      ----------------------
   Requests  5000      Integer      Component  ..  Requests per iteration

   1 row returned.

3. srvrmgr:siebapp2> change param Requests=20 for comp WorkMon

   Command completed successfully

Note: To see the entire list of parameters for the WorkMon component, type the following:

srvrmgr:siebapp2> list param for comp WorkMon

7.2.4 EAI-MQSeries

The Siebel 7.5 EAI MQSeries Adapter read transactions from, and placed transactions into, IBM MQSeries queues. This test is designed to receive 400,000 messages from IBM MQSeries into the Siebel application. Messages are divided into different categories depending on the type of operations they perform during receive. This test stresses the file system on which the queues reside by performing about 10% database inserts and 10% updates. Persistent queues are used. As a result, the database tuning and disk layout explained in this document are key for performance.

To achieve the best throughput the following setup was done:

MinMTServers=MaxMTServers=1, MaxTasks=45 for component MqSeriesSrvRcvr on the Siebel server.

Moving the following directories to a different disk helped to improve performance by alleviating I/O bottlenecks:

/mqm/qmgrs/PERFQMGR/queues/SENDQUEUE    - Send queue
/mqm/qmgrs/PERFQMGR/queues/RECEIVEQUEUE - Receive queue
/mqm/qmgrs/PERFQMGR/queues/LOG(ACTIVE)  - Active logs



MQSeries parameter tuning:

In file /var/mqm/qmgrs/PERFQMGR/qmstatus.ini:
  PERFQMGR.OAMPipe_msg=2000000 (changed from default)

In file /var/mqm/qmgrs/PERFQMGR/qm.ini:
  LogBufferPages=512 (changed from default 17)
  LogPrimaryFiles=12 (default value 2)
  LogSecondaryFiles=2 (default value 2)
  LogFilePages=16384 (changed from default 1024; 16384 is the maximum)
  LogType=CIRCULAR (default CIRCULAR)

From all of the preceding tweaks, a performance improvement of 35% in throughput was measured. Note: It was discovered during the benchmark that MQSeries version 5.1/5.2 has a 640,000 message limit, while 5.3 does not.
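As a sanity check on the log settings above, the disk space the circular log will consume can be estimated. The 4-Kbyte log page size is an assumption based on standard MQSeries behavior; verify it against the documentation for your version:

```shell
#!/bin/sh
# Sketch: estimate MQSeries circular-log disk usage from the qm.ini values
# above. Assumes the standard 4-Kbyte (4096-byte) log page size.
LOG_PRIMARY_FILES=12
LOG_SECONDARY_FILES=2
LOG_FILE_PAGES=16384
PAGE_BYTES=4096

PRIMARY_BYTES=$((LOG_PRIMARY_FILES * LOG_FILE_PAGES * PAGE_BYTES))
TOTAL_BYTES=$(((LOG_PRIMARY_FILES + LOG_SECONDARY_FILES) * LOG_FILE_PAGES * PAGE_BYTES))

echo "Primary log space: $((PRIMARY_BYTES / 1024 / 1024)) MB"          # 768 MB
echo "With secondary extents: $((TOTAL_BYTES / 1024 / 1024)) MB"       # 896 MB
```

Size the file system holding the active logs (moved to its own disk above) with at least this much headroom.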

7.2.5 EAI-HTTP Adapter

The Siebel 7 EAI-HTTP Adapter executed requests between different web infrastructures. The Siebel EAI-HTTP Adapter Transport business service lets one send XML messages over HTTP to a target URL (web site). The Siebel Web Engine serves as the transport to receive XML messages sent over the HTTP protocol to Siebel.

The Siebel EAI component group is enabled on one application server, and the HTTP driver is run on a Sun Fire v65/Windows XP machine.

To achieve the result of 287,352 business transactions per hour, 4 threads were run concurrently, where each thread worked on 70,000 records. The optimum values for the Siebel servers of type EAIObjMgr for this workload were:

MaxMTServers=2 MinMTServers=2 MaxTasks=10

The other Siebel OM parameters for EAIObjMgr are best left at their defaults; changing SISPERSISSCON from its default of 20 does not help. All of the Oracle database optimizations explained in that section of this paper helped achieve the throughput of 79 objects/sec.

The HTTP driver machine was maxed out, causing slow performance. The reason was I/O: the HTTP test program was generating about 3 Gbytes of logs during the test. Once this logging was turned off, as shown below, performance improved.

driver.logresponse=false
driver.printheaders=false

The AnonSessionTimeout and SessionTimeout values in the SWSE were set to 300.

HTTP adapter (EAIObjMgr_enu) tunable parameters:

● MaxTasks - Recommended value at the moment: 10
● MaxMTServers, MinMTServers - Recommended value: 2
● UpperThreshold - Recommended value: 25
● LoadBalanced - Recommended value: true (the default value)
● Driver.Count (at client) - Recommended value: 4

7.3 Siebel Server Scalability Limitations and Solutions

7.3.1 The Siebel MaxTasks Upper Limit Problem

To configure a Siebel server to run over 8000 concurrent users, the Siebel parameter MaxTasks has to be set to a value of 8800. When this was done, the Siebel object manager processes (that is, the Siebel servers) failed to start and logged error messages. This failure was not due to any resource limitation from the Solaris OS: there were ample amounts of CPU, memory, and swap space on the machine, an E2900 with 12 CPUs and 48 Gbytes of RAM.

The Siebel enterprise server logged the following error messages and failed to start:

GenericLog GenericError 1 2004-07-30 21:40:07 (sissrvr.cpp 47(2617) err=2000026 sys=17) SVR-00026: Unable to allocate shared memory
GenericLog GenericError 1 2004-07-30 21:40:07 (scfsis.cpp 5(57) err=2000026 sys=0) SVR-00026: Unable to allocate shared memory
GenericLog GenericError 1 2004-07-30 21:40:07 (listener.cpp 21(157) err=2000026 sys=0) SVR-00026: Unable to allocate shared memory

Explanation

To understand what occurred, it is necessary to review some background on the Siebel server process siebsvc.

The siebsvc process runs as a system service that monitors and controls the state of every Siebel server component operating on that Siebel server. Each Siebel server is an instantiation of the Siebel Server System Service (siebsvc) within the current Siebel Enterprise Server. The Siebel server runs as a daemon process in a UNIX environment. During startup, the Siebel Server System Service (siebsvc) performs the following sequential steps:

1. Retrieves configuration information from the Siebel Gateway Name Server.

2. Creates a shared memory file located in the admin subdirectory of the Siebel server root directory on UNIX. By default, this file has the name Enterprise_Server_Name.Siebel_Server_Name.shm.

The size of this *.shm file is directly proportional to the MaxTasks setting: the higher the number of concurrent users, the higher the MaxTasks. The Siebel Server System Service deletes the .shm file when it shuts down.


Investigating the Reason for the Failure

The .shm file created during startup is memory-mapped and becomes part of the heap of the siebsvc process. That means the process size of siebsvc grows proportionally with the size of the .shm file.

With MaxTasks configured at 9500, a .shm file of size 1.15 Gbytes was created during server startup:

siebapp6:/tmp/%ls -l /export/siebsrvr/admin/siebel.sdcv480s002.shm
-rwx------ 1 sunperf other 1212096512 Jan 24 11:44 /export/siebsrvr/admin/siebel.sdcv480s002.shm

And siebsvc had a process size of 1.14 Gbytes when the process died abruptly:

  PID USERNAME  SIZE   RSS STATE  PRI NICE     TIME  CPU PROCESS/NLWP
25309 sunperf  1174M 1169M sleep   60    0  0:00:06 6.5% siebsvc/1

A truss of the process reveals that it is trying to mmap a file 1.15 Gbytes in size and fails with ENOMEM. Here is the truss output:

8150: brk(0x5192BF78) = 0
8150: open("/export/siebsrvr/admin/siebel_siebapp6.shm", O_RDWR|O_CREAT|O_EXCL, 0700) = 9
8150: write(9, "\0\0\0\0\0\0\0\0\0\0\0\0".., 1367736320) = 1367736320
8150: mmap(0x00000000, 1367736320, PROT_READ|PROT_WRITE, MAP_SHARED, 9, 0) Err#12 ENOMEM

Had the mmap succeeded, the process would have had a process size greater than 2 Gbytes. Because Siebel is a 32-bit application, it can have a process size of up to 4 Gbytes (2^32 = 4 Gbytes) on the Solaris OS. But in our case, the process failed with a size just over 2 Gbytes.



The following were the system resource settings on the failed machine:

sdcv480s002:/export/home/sunperf/18306/siebsrvr/admin/%ulimit -a
time(seconds)         unlimited
file(blocks)          unlimited
data(kbytes)          unlimited
stack(kbytes)         8192
coredump(blocks)      unlimited
nofiles(descriptors)  256
vmemory(kbytes)       unlimited

Even though the maximum size of the data segment (heap) is reported as unlimited by the ulimit command above, the maximum limit is 2 Gbytes by default on the Solaris OS. The actual upper limits seen by an application running on Solaris can be obtained with a simple C program that calls the getrlimit API from sys/resource.h. The following program prints the system limits for data, stack, and vmemory:

#include <stdio.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>

static void showlimit(int resource, char* str)
{
    struct rlimit lim;
    if (getrlimit(resource, &lim) != 0) {
        (void)printf("Couldn't retrieve %s limit\n", str);
        return;
    }
    (void)printf("Current/maximum %s limit is \t%lu / %lu\n", str,
        (unsigned long)lim.rlim_cur, (unsigned long)lim.rlim_max);
}

int main() {
    showlimit(RLIMIT_DATA, "data");
    showlimit(RLIMIT_STACK, "stack");
    showlimit(RLIMIT_VMEM, "vmem");
    return 0;
}

Output from the C program on the failed machine is as follows:

sdcv480s002:/export/siebsrvr/admin/%showlimits
Current/maximum data limit is   2147483647 / 2147483647
Current/maximum stack limit is  8388608 / 2147483647
Current/maximum vmem limit is   2147483647 / 2147483647

From the output shown, it is clear that the processes were bound to a maximum data limit of 2 Gbytes on an out-of-the-box Solaris system setup. This limitation is the reason for the failure of the siebsvc process as it tried to grow beyond 2 Gbytes.

Solution

The solution is to increase the default system limit for the datasize and reduce the stacksize. An increase in datasize creates more room for the process address space, and a reduction in stacksize reduces the reserved stack space. Both of these adjustments let a Siebel process use its process address space more efficiently, allowing the total Siebel process size to grow up to 4 Gbytes (which is the upper limit for a 32-bit application).

1. What are the recommended values for data and stack sizes on the Solaris OS while running the Siebel application? How does one change the limits of datasize and stacksize?

Set the datasize to 4 Gbytes (that is, the maximum address space allowed for a 32-bit process) and set the stacksize to a small value (512 Kbytes was used here). With these limits in place, the Siebel server process reported:

time(seconds)         unlimited   unlimited
file(blocks)          unlimited   unlimited
data(kbytes)          4194303     4194303
stack(kbytes)         512         512
coredump(blocks)      unlimited   unlimited
nofiles(descriptors)  65536       65536
vmemory(kbytes)       unlimited   unlimited
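One way to apply such limits is in the Siebel owner's .profile, before siebsvc is started. This is a sketch for sh/ksh; ulimit -d and -s take Kbytes, and the values mirror the listing above. Raising hard limits may require root privileges, so treat this as a starting point rather than a guaranteed recipe:

```shell
#!/bin/sh
# Sketch: set the process limits for the Siebel owner (sh/ksh syntax),
# matching the listing above, before the Siebel server is started.
ulimit -d 4194303    # data segment, in Kbytes (~4 GB)
ulimit -s 512        # stack, in Kbytes
ulimit -n 65536      # file descriptors

ulimit -d; ulimit -s    # confirm the new soft limits
```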

The preceding setting allows siebsvc to mmap the .shm file; siebsvc then succeeds in forking the rest of the processes, and the Siebel server processes start up successfully. Thus this finding enables one to configure MaxTasks greater than 9000 on the Solaris OS. That means one can get around the scalability limit of 9000 users per Siebel server, but how much higher can one go? We determined the limit to be 18,000. If MaxTasks is configured above 18,000, Siebel calculates the *.shm file size to be about 2 Gbytes. siebsvc tries to mmap this file and fails, as it has hit the limit for a 32-bit application process.
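Since the failure mode is siebsvc hitting the mmap ceiling, a simple pre-flight check of the .shm file size can catch the problem before startup. The 2-Gbyte threshold below comes from the limit described above; the helper name and file path are illustrative.

```shell
#!/bin/sh
# Sketch: warn if a Siebel .shm file is at or above the ~2 Gbyte size
# at which a 32-bit siebsvc fails to mmap it.
SHM_LIMIT=2147483648   # 2 Gbytes

check_shm() {
    # $1 = path to the .shm file
    size=$(wc -c < "$1")
    if [ "$size" -ge "$SHM_LIMIT" ]; then
        echo "$1: $size bytes - too large for a 32-bit siebsvc to mmap"
    else
        echo "$1: $size bytes - OK"
    fi
}
```

Usage would be, for example: `check_shm /export/siebsrvr/admin/siebel.sdcv480s002.shm`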

The following listing shows the Siebel .shm file with MaxTasks set to 18,000:

sdcv480s002:/export/siebsrvr/admin/% ls -l *.shm ; ls -lh *.shm
-rwx------ 1 sunperf other 2074238976 Jan 25 13:23 siebel.sdcv480s002.shm*
-rwx------ 1 sunperf other 1.9G       Jan 25 13:23 siebel.sdcv480s002.shm*

sdcv480s002:/export/siebsrvr/admin/% plimit 12527
12527: siebsvc -s siebsrvr -a -g sdcv480s002 -e siebel -s sdcv480s002
   resource             current    maximum
   time(seconds)        unlimited  unlimited
   file(blocks)         unlimited  unlimited
   data(kbytes)         4194303    4194303
   stack(kbytes)        512        512
   coredump(blocks)     unlimited  unlimited
   nofiles(descriptors) 65536      65536
   vmemory(kbytes)      unlimited  unlimited

sdcv480s002:/export/siebsrvr/admin/% prstat -s size -u sunperf -a
  PID USERNAME  SIZE   RSS STATE PRI NICE    TIME  CPU PROCESS/NLWP
12527 sunperf  3975M 3966M sleep  59    0 0:00:55 0.0% siebsvc/3
12565 sunperf  2111M  116M sleep  59    0 0:00:08 0.1% siebmtsh/8
12566 sunperf  2033M 1159M sleep  59    0 0:00:08 3.1% siebmtshmw/10
12564 sunperf  2021M   27M sleep  59    0 0:00:01 0.0% siebmtsh/12
12563 sunperf  2000M   14M sleep  59    0 0:00:00 0.0% siebproc/1
26274 sunperf    16M   12M sleep  59    0 0:01:43 0.0% siebsvc/2

In the rare case that there is a need to run more than 18,000 Siebel users on a single Siebel server node, the way to do it is to install another Siebel server instance on the same node. This works well and is a supported configuration on the Siebel/Sun platform. It is not advisable to configure more MaxTasks than needed, as this could affect the performance of the overall Siebel enterprise.

7.3.2 Bloated Siebel Processes (Commonly Mistaken as Memory Leaks)

Symptom

During some of the high concurrent-user tests, it was observed that hundreds of users suddenly failed.



Description

The reason for this symptom was that some of the Siebel server processes servicing these users had either hung or crashed. Close observation with repeated replays revealed that the siebmtshmw process memory shot up from 700 Mbytes to 2 Gbytes.

The UltraSPARC IV-based E2900 server can handle up to 30,000 processes and about 87,000 LWPs (threads), so there was no resource limitation from the machine here. The Siebel applications are running into 32-bit process space limits -- but why is the memory per process going from 700 Mbytes to 2 Gbytes? Further debugging led us to the problem: the stack size.

The total memory size of a process is made up of the heap, stack, data, and anon segments. In this case, it was found that the bloating was in the stack, which grew from 64 Kbytes to 1 Gbyte, which was abnormal. pmap is a utility on the Solaris system that provides a detailed breakdown of memory sizes per process.

Here is an output of pmap showing that the stack segment bloated to about 1 Gbyte (pmap output unit is Kbytes):

siebapp6@/export/pspp> grep stack pmap.4800mixed.prob.txt
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
C012A000 1043224 1043224 - 1043224 read/write/exec [ stack ]
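The per-process stack total in such output can be computed with a one-line awk filter. The sketch below replays the sample lines above through a here-document; against a live process you would pipe `pmap <pid>` into awk instead.

```shell
# Sketch: sum the "[ stack ]" segment sizes (Kbytes) from pmap output.
awk '/\[ stack \]/ { sum += $2 }
     END { printf "total stack: %d Kbytes\n", sum }' <<'EOF'
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
FFBE8000      32      32 -      32 read/write/exec [ stack ]
C012A000 1043224 1043224 - 1043224 read/write/exec [ stack ]
EOF
# prints: total stack: 1043384 Kbytes
```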

We reduced the stack size to 128 Kbytes and this fixed the problem. Now a single Siebel server configured with 10,000 MaxTasks started up successfully; it had been failing to start up before, breaking at the mmap call. A stack size of 8 Mbytes is the default setting on the Solaris system. However, limiting the stack size to 128 Kbytes removed a lot of the instability in our high-load tests, and we were able to run 5000-user tests without errors, pushing up to 80% to 90% CPU usage on the 12-way E2900 server.

Changing the mainwin address MW_GMA_VADDR=0xc0000000 to other values did not seem to make a big difference. Please do not vary this parameter.

Solution

Change the stack size hard limit of the Siebel process from unlimited to 512 Kbytes. Stack size on the Solaris OS has a hard limit and a soft limit. The default values for these two limits are unlimited and 8 Mbytes, respectively. This means that an application process on the Solaris OS can grow its stack anywhere up to the hard limit. Since the default hard limit on the Solaris system is unlimited, the Siebel application processes could grow their stack size all the way up to 2 Gbytes. When this occurs, the total memory size of the Siebel process hits the maximum limit of memory addressable by a 32-bit process and bad things happen (such as a hang or crash). Setting the limit for stack to 1 Mbyte or a lower value resolved the issue.



How is this done?

sdcv480s002:/export/home/sunperf/18306/siebsrvr/admin/% limit stacksize
stacksize 8192 kbytes
sdcv480s002:/export/siebsrvr/admin/% limit stacksize 512
sdcv480s002:/export/siebsrvr/admin/% limit stacksize
stacksize 512 kbytes

Please note that a large stack limit can inhibit the growth of the data segment, because the total process size upper limit is 4 Gbytes for a 32-bit application. Also, even if the process stack hasn't grown to a large extent, virtual memory space is reserved for it according to the limit value. While the recommendation to limit stack size to 512 Kbytes worked well for the workload defined in this paper, this setting may have to be tweaked for different Siebel deployments and workloads. The range could be from 512 to 1024 Kbytes.

7.4 Tuning Sun Java System Web Server

The three main files where tuning can be done are obj.conf, server.xml, and magnus.conf.

Edit magnus.conf:

1. Set RqThrottle to 4028 in the magnus.conf file under the web server root directory.
2. Set ListenQ to 16000.
3. Set ConnQueueSize to 8000.
4. Set KeepAliveQueryMeanTime to 50.
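Collected in one place, the directives above would appear in magnus.conf roughly as follows (a fragment only; the rest of the installed file is left as-is):

```
RqThrottle 4028
ListenQ 16000
ConnQueueSize 8000
KeepAliveQueryMeanTime 50
```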

Edit server.xml:

1. Replace the host name with the IP address in server.xml.

Edit obj.conf:

1. Turn off access logging.
2. Turn off cgi, jsp, and servlet support.
3. Remove the following lines from obj.conf, since these are not used by Siebel:
   ###PathCheck fn="checkacl" acl="default"
   ###PathCheck fn=unix-uri-clean

Tuning parameters used for high user load with Sun Java System Web Servers are listed in the following table.



Parameter                      Scope         Default Value  Tuned Value
shmsys:shminfo_shmmax          /etc/system                  0xffffffffffffffff
shmsys:shminfo_shmmin          /etc/system                  100
shmsys:shminfo_shmseg          /etc/system                  200
semsys:seminfo_semmns          /etc/system                  12092
semsys:seminfo_semmsl          /etc/system                  512
semsys:seminfo_semmni          /etc/system                  4096
semsys:seminfo_semmap          /etc/system                  4096
semsys:seminfo_semmnu          /etc/system                  4096
semsys:seminfo_semopm          /etc/system                  100
semsys:seminfo_semume          /etc/system                  2048
msgsys:msginfo_msgmni          /etc/system                  2048
msgsys:msginfo_msgtql          /etc/system                  2048
msgsys:msginfo_msgssz          /etc/system                  64
msgsys:msginfo_msgseg          /etc/system                  32767
msgsys:msginfo_msgmax          /etc/system                  16384
msgsys:msginfo_msgmnb          /etc/system                  16384
rlim_fd_max                    /etc/system   1024           16384
rlim_fd_cur                    /etc/system   64             16384
sq_max_size                    /etc/system   2              0
tcp_time_wait_interval         ndd /dev/tcp  240000         60000
tcp_conn_req_max_q             ndd /dev/tcp  128            1024
tcp_conn_req_max_q0            ndd /dev/tcp  1024           4096
tcp_ip_abort_interval          ndd /dev/tcp  480000         60000
tcp_keepalive_interval         ndd /dev/tcp  7200000        900000
tcp_rexmit_interval_initial    ndd /dev/tcp  3000           3000
tcp_rexmit_interval_max        ndd /dev/tcp  240000         10000
tcp_rexmit_interval_min        ndd /dev/tcp  200            3000
tcp_smallest_anon_port         ndd /dev/tcp  32768          1024
tcp_slow_start_initial         ndd /dev/tcp  1              2
tcp_xmit_hiwat                 ndd /dev/tcp  8129           32768
tcp_fin_wait_2_flush_interval  ndd /dev/tcp  67500          675000
tcp_recv_hiwat                 ndd /dev/tcp  8129           32768

Table 7.4
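As an illustration, the /etc/system rows of Table 7.4 translate into set directives such as the following (assembled from the table; /etc/system changes take effect only after a reboot). The ndd /dev/tcp rows are applied separately at runtime, for example with `ndd -set /dev/tcp tcp_time_wait_interval 60000`.

```
set shmsys:shminfo_shmmax=0xffffffffffffffff
set shmsys:shminfo_shmmin=100
set shmsys:shminfo_shmseg=200
set semsys:seminfo_semmns=12092
set semsys:seminfo_semmsl=512
set semsys:seminfo_semmni=4096
set semsys:seminfo_semmap=4096
set semsys:seminfo_semmnu=4096
set semsys:seminfo_semopm=100
set semsys:seminfo_semume=2048
set msgsys:msginfo_msgmni=2048
set msgsys:msginfo_msgtql=2048
set msgsys:msginfo_msgssz=64
set msgsys:msginfo_msgseg=32767
set msgsys:msginfo_msgmax=16384
set msgsys:msginfo_msgmnb=16384
set rlim_fd_max=16384
set rlim_fd_cur=16384
set sq_max_size=0
```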



7.5 Tuning the Siebel Web Server Extension (SWSE)

1. In the Siebel Web Plugin installation directory, go to the bin directory. Edit the eapps.cfg file and make the following changes:

• Set AnonUserPool to 15% of .

• Set the following settings in the "default" section:
  - GuestSessionTimeout to 60; this is required for scenarios where the user is browsing without logging in.
  - AnonSessionTimeout to 300.
  - SessionTimeout to 300.

• Set appropriate AnonUser names/passwords:
  SADMIN/SADMIN for eChannel and Call Center (Database login)
  GUEST1/GUEST1 for eService and eSales (LDAP login)
  GUESTERM/GUESTERM for ERM (Database login)

The AnonUserPool setting can vary for different types of Siebel users (call center, sales, and so on).

The table below summarizes the individual eapps.cfg settings for each type of Siebel application (Call Center, eChannel, eSales, and eService) used in the 10,000-user test.

Parameter            callcenter_enu                      prmportal_enu  esales_enu  eservice_enu
AnonUserName         SADMIN                              GUESTCP        eApps2      eApps3
AnonPassword         SADMIN                              GUESTCP        eApps2      eApps3
AnonUserPool         CC1=420, CC2=525, CC3=315, CC4=210  320            160         360
AnonSessionTimeout   360                                 360            360         360
GuestSessionTimeout  60                                  60             60          60
SessionTimeout       300                                 300            300         300

Table 7.5
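For example, the callcenter_enu column of Table 7.5 would appear in eapps.cfg roughly as shown below. The section name and key layout are typical of eapps.cfg and should be verified against your installed file; the AnonUserPool value is carried over verbatim from the table.

```
[/callcenter_enu]
AnonUserName        = SADMIN
AnonPassword        = SADMIN
AnonUserPool        = CC1=420,CC2=525,CC3=315,CC4=210
AnonSessionTimeout  = 360
GuestSessionTimeout = 60
SessionTimeout      = 300
```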

The SWSE stats web page is a very good resource for tuning Siebel.



7.6 Tuning Siebel Standard Oracle Database and Sun Storage

The size of the database used was approximately 140 Gbytes. The database was built to simulate customers with large transaction volumes and data distributions that represented the most common customer data shapes. Table 7.6 shows a sampling of record volumes and sizes in the database for key business entities of the standard Siebel volume database.

Business Entity   Database Table Name  Number of Records  Size in Kbytes
Accounts          S_ORG_EXT            1,897,161          3,145,728
Activities        S_EVT_ACT            8,744,305          6,291,456
Addresses         S_ADDR_ORG           3,058,666          2,097,152
Contacts          S_CONTACTS           3,366,764          4,718,592
Employees         S_EMPLOYEE_ATT       21,000             524
Opportunities     S_OPTY               3,237,794          4,194,304
Orders            S_ORDER              355,297            471,859
Products          S_PROD_INT           226,000            367,001
Quote Items       S_QUOTE_ITEM         1,984,099          2,621,440
Quotes            S_QUOTE_ATT          253,614            524
Service Requests  S_SRV_REQ            5,581,538          4,718,592

Table 7.6

7.6.1 Optimal Database Configuration

Creating a well-planned database to begin with requires less tuning and reorganizing at runtime. While many resources are available to facilitate the creation of high-performance Oracle databases, most tuning engineers find themselves tweaking a database consisting of thousands of tables and indexes, piece by piece. This is both time consuming and prone to mistakes. Eventually one would end up rebuilding the entire database from scratch.

The following approach provides an alternative to tuning a pre-existing, pre-packaged database:

1. Measure the exact space used by each object in the schema. The dbms_space package provides the accurate space used by an index or a table. Other sources like dba_free_space only tell you "how much is free" out of the total allocated space, which is always more. Next, run the benchmark test and again measure the space used. The difference gives an accurate report of how much each table and index grows during the test. One can use this data to right-size all of the tables, with capacity planned for growth during the test. The data can also be used to identify, and concentrate on, only the hot tables used by the test.

2. Create a new database with multiple index and data tablespaces. The idea is to place all equi-extent-sized tables into their own tablespace. Keeping the data and index objects in their own tablespaces reduces contention and fragmentation, and also provides for easier monitoring. Keeping tables with equal extent sizes in their own tablespace reduces fragmentation, as old and new extent allocations are always of the same size within a given tablespace, leaving no room for empty odd-sized pockets in between. This leads to compact data placement, which reduces the number of I/Os done.

3. Build a script to create all of the tables and indexes. This script should create the tables in their appropriate tablespaces with the appropriate parameters, such as freelists, freelist_groups, pctfree, pctused, and so on. Use this script to place all of the tables in their tablespaces and then import the data. This results in a clean, defragmented, optimized, and right-sized database.

The tablespaces should also be built to be locally managed. This allows space management to be done locally within the tablespace, unlike default (dictionary managed) tablespaces, which write to the system tablespace for every extent change. The list of hot tables for Siebel is available in Appendix B.

7.6.2 Properly Locating Data on the Disk for Best Performance

To achieve the incredible capacity of current disk drives, disk manufacturers have implemented zone bit recording. This means that the outer edge of the disk drive has more available storage area than the inside edge; that is, the number of sectors per track decreases as you move toward the center of the disk. Disk drive manufacturers take advantage of this by recording more data on the outer edges. Since the disk drive rotates at a constant speed, the outer tracks have faster transfer rates than the inner tracks. For example, a Seagate 36-Gbyte Cheetah[1] drive has a data transfer speed ranging from 57 Mbytes/sec on the inner tracks to 86 Mbytes/sec on the outer tracks -- a 50% improvement in transfer speed.

For benchmarking purposes, it is desirable to:

1. Place active large block transfers on the outer edges of the disk to minimize data transfer time.

2. Place active random small block transfers on the outer edges of the disk drive only if active large block transfers are not in the benchmark.

3. Place inactive random small block transfers on the inner sections of the disk drive to minimize the impact of the data transfer speed discrepancies.

Further, if the benchmark only deals with small block I/Os, as in SPC Benchmark-1[2] benchmarking, the priority is to put the most active LUNs on the outer edge and the less active LUNs on the inner edge of the disk drive.

[1] The Cheetah 15K RPM disk drive datasheet can be found at www.seagate.com.
[2] Further information about SPC Benchmark-1(TM) can be found at www.StoragePerformance.org.



[Figure 7.6.2: zone bit recording with five zones; outer-zone transfer rate 86 Mbytes/sec versus 57 Mbytes/sec on the inner zone]

Figure 7.6.2 shows a zone bit recording example with five zones. The outer edge holds the most data and has the fastest transfer rate.

7.6.3 Disk Layout and Oracle Data Partitioning

An I/O subsystem with low contention and high throughput is key to obtaining high performance with Oracle. After analyzing the Siebel workload, an appropriate design was made.

The I/O subsystem consisted of a Sun StorEdge SE6320 connected to the E2900 database server via Fibre Channel. The SE6320 has two base and two expansion arrays driven through two controllers. Each tray consists of 14 x 36-Gbyte disks at 15,000 RPM, giving 56 disks in total and over 2 Tbytes of total storage. Each tray has a cache of 1 Gbyte. All of the trays were formatted in RAID 0 mode and two LUNs per tray were created. Eight striped volumes of 300 Gbytes each were carved. Each volume was striped across seven physical disks with a stripe size of 64 Kbytes. Eight filesystems (UFS) were built on top of these striped volumes: T4disk1, T4disk2, T4disk3, T4disk4, T4disk5, T4disk6, T4disk7, and T4disk8.

The following example shows the tuning done at the disk array level. Please note that cache writebehind was turned off and the disk scrubber was turned off.



array00:/:sys list
controller    : 2.5
blocksize     : 64k
cache         : auto
mirror        : auto
mp_support    : none
naca          : off
rd_ahead      : off
recon_rate    : med
sys memsize   : 256 MBytes
cache memsize : 1024 MBytes
fc_topology   : auto
fc_speed      : 2Gb
disk_scrubber : off
ondg          : befit
array00:/:


DATA_51200K  Tablespace to hold the medium Siebel tables.
INDX_512K    Tablespace for all the indexes on small tables.
DATA_5120K   Tablespace for the small Siebel tables.
TEMP         Oracle temporary segments.
TOOLS        Oracle performance measurement objects.
DATA_512K    Tablespace for Siebel small tables.
SYSTEM       Oracle system tablespace.

Tablespace to Logical Volume Mapping

DATA_512000K
  /t3disk2/oramst/oramst_data_512000K.01
  /t3disk2/oramst/oramst_data_512000K.04
  /t3disk2/oramst/oramst_data_512000K.07
  /t3disk2/oramst/oramst_data_512000K.10
  /t3disk3/oramst/oramst_data_512000K.02
  /t3disk3/oramst/oramst_data_512000K.05
  /t3disk3/oramst/oramst_data_512000K.08
  /t3disk3/oramst/oramst_data_512000K.11
  /t3disk4/oramst/oramst_data_512000K.03
  /t3disk4/oramst/oramst_data_512000K.06

DATA_51200K
  /t3disk2/oramst/oramst_data_51200K.02
  /t3disk3/oramst/oramst_data_51200K.03
  /t3disk4/oramst/oramst_data_51200K.01
  /t3disk4/oramst/oramst_data_51200K.04

DATA_5120K
  /t3disk3/oramst/oramst_data_5120K.01

DATA_512K
  /t3disk2/oramst/oramst_data_512K.01

INDX_51200K
  /t3disk5/oramst/oramst_indx_51200K.02
  /t3disk5/oramst/oramst_indx_51200K.05
  /t3disk5/oramst/oramst_indx_51200K.08
  /t3disk5/oramst/oramst_indx_51200K.11
  /t3disk6/oramst/oramst_indx_51200K.03
  /t3disk6/oramst/oramst_indx_51200K.06
  /t3disk6/oramst/oramst_indx_51200K.09
  /t3disk6/oramst/oramst_indx_51200K.12
  /t3disk7/oramst/oramst_indx_51200K.01
  /t3disk7/oramst/oramst_indx_51200K.04
  /t3disk7/oramst/oramst_indx_51200K.07
  /t3disk7/oramst/oramst_indx_51200K.10

INDX_5120K
  /t3disk5/oramst/oramst_indx_5120K.02
  /t3disk6/oramst/oramst_indx_5120K.03
  /t3disk7/oramst/oramst_indx_5120K.01

INDX_512K
  /t3disk5/oramst/oramst_indx_512K.01
  /t3disk5/oramst/oramst_indx_512K.04
  /t3disk6/oramst/oramst_indx_512K.02
  /t3disk6/oramst/oramst_indx_512K.05
  /t3disk7/oramst/oramst_indx_512K.03

RBS
  /t3disk2/oramst/oramst_rbs.01
  /t3disk2/oramst/oramst_rbs.07
  /t3disk2/oramst/oramst_rbs.13
  /t3disk3/oramst/oramst_rbs.02
  /t3disk3/oramst/oramst_rbs.08
  /t3disk3/oramst/oramst_rbs.14
  /t3disk4/oramst/oramst_rbs.03
  /t3disk4/oramst/oramst_rbs.09
  /t3disk4/oramst/oramst_rbs.15
  /t3disk5/oramst/oramst_rbs.04
  /t3disk5/oramst/oramst_rbs.10
  /t3disk5/oramst/oramst_rbs.16
  /t3disk6/oramst/oramst_rbs.05
  /t3disk6/oramst/oramst_rbs.11
  /t3disk6/oramst/oramst_rbs.17
  /t3disk7/oramst/oramst_rbs.06
  /t3disk7/oramst/oramst_rbs.12
  /t3disk7/oramst/oramst_rbs.18

SYSTEM
  /t3disk2/oramst/oramst_system.01

TEMP
  /t3disk7/oramst/oramst_temp.12

TOOLS
  /t3disk2/oramst/oramst_tools.01
  /t3disk3/oramst/oramst_tools.02
  /t3disk4/oramst/oramst_tools.03
  /t3disk5/oramst/oramst_tools.04
  /t3disk6/oramst/oramst_tools.05
  /t3disk7/oramst/oramst_tools.06

With the preceding setup of Oracle using hardware-level striping, and with Oracle objects placed in different tablespaces, an optimal setup was reached, with no I/O waits noticed and no single disk more than 20% busy during the tests. Veritas was not used, as this setup provided the required I/O throughput.

Following is the iostat output: two snapshots at five-second intervals, taken during the steady state of the test on the database server. There are minimal reads (r/s), and writes are balanced across all volumes. The exception is c7t1d0, the dedicated T4+ array for the Oracle redo logs. It is quite normal to see high writes/sec on redo logs; this just indicates that transactions are getting done rapidly in the database. The reads/sec on this volume is abnormal. The volume is at 27% busy, which is considered borderline high. Fortunately, service times are very low.

Wed Jan 8 15:25:20 2003
                    extended device statistics
 r/s   w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 0.0   1.0    0.0    5.6  0.0  0.0    0.0    1.0   0   1 c0t3d0
 1.0  27.6    8.0  220.8  0.0  0.0    0.0    1.3   0   2 c2t7d0
 0.0  26.0    0.0  208.0  0.0  0.0    0.0    0.5   0   1 c3t1d0
 0.6  37.4    4.8  299.2  0.0  0.0    0.0    0.8   0   2 c4t5d0
 0.0  23.4    0.0  187.2  0.0  0.0    0.0    0.6   0   1 c5t1d0
 0.0  10.2    0.0   81.6  0.0  0.0    0.0    0.5   0   0 c6t1d0
 3.8 393.0 1534.3 3143.8  0.0  0.2    0.0    0.6   0  22 c7t1d0
 0.0  28.2    0.0  225.6  0.0  0.0    0.0    0.6   0   1 c8t1d0

Wed Jan 8 15:25:25 2003
                    extended device statistics
 r/s   w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 1.4  19.8   11.2  158.4  0.0  0.0    0.0    1.0   0   2 c2t7d0
 0.0  18.2    0.0  145.6  0.0  0.0    0.0    0.5   0   1 c3t1d0
 0.8  39.0    6.4  312.0  0.0  0.0    0.0    0.7   0   2 c4t5d0
 0.0  18.8    0.0  150.4  0.0  0.0    0.0    0.6   0   1 c5t1d0
 0.0  17.4    0.0  139.2  0.0  0.0    0.0    0.5   0   1 c6t1d0
 5.2 463.2 1844.8 3705.6  0.0  0.3    0.0    0.7   0  27 c7t1d0
 0.0  29.4    0.0  235.2  0.0  0.0    0.0    0.5   0   1 c8t1d0
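The "more than 20% busy" reading done by eye above can also be applied mechanically to captured output. The following sketch is not from the original document; it runs awk over a few sample rows copied from the snapshots above (column 10 is %b, column 11 the device).

```shell
# Flag any device whose %b (busy) column exceeds 20% in a captured
# iostat snapshot. Sample rows are taken from the snapshots above.
cat > /tmp/iostat.sample <<'EOF'
0.0 1.0 0.0 5.6 0.0 0.0 0.0 1.0 0 1 c0t3d0
5.2 463.2 1844.8 3705.6 0.0 0.3 0.0 0.7 0 27 c7t1d0
0.0 29.4 0.0 235.2 0.0 0.0 0.0 0.5 0 1 c8t1d0
EOF
awk '$10 > 20 { print $11 " is " $10 "% busy" }' /tmp/iostat.sample
```

This prints only c7t1d0, matching the observation in the text.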



7.6.4 Solaris MPSS Tuning for Oracle Server

Available as a standard feature since the Solaris 9 OS, Multiple Page Size Support
(MPSS) allows a program to use any hardware-supported page size to access portions of
virtual memory. MPSS improves virtual memory performance by allowing applications to
use large page sizes, improving resource efficiency and reducing overhead, and it
accomplishes this without recompiling or recoding applications.

Enable MPSS for the Oracle server and shadow processes on the Solaris 9 OS or later
to reduce the TLB miss rate. If the TLB miss percentage is high, it is recommended to
use the largest available page size.

Enabling MPSS for Oracle Processes

1. Enable the kernel cage if the machine is not an E10K or F15K, and reboot the
   system. The kernel cage can be enabled by the following setting in /etc/system:

   set kernel_cage_enable=1

   Why do we need the kernel cage? To address a problem with memory fragmentation.
   Immediately after a system boot, a sizeable pool of large pages is available, and
   applications can get all of their mmap() memory allocated from large pages. This
   can be verified using pmap -xs <pid>. If the machine has been in use for a while,
   an application may not get the desired large pages until the machine is rebooted.
   This is mainly due to fragmentation of physical memory.

   Fragmentation can be vastly reduced by enabling the kernel cage. With the kernel
   cage enabled, the kernel is allocated from a small contiguous range of memory,
   minimizing the fragmentation of other pages within the system.

2. Find all hardware address translation (HAT) page sizes supported by the system
   with pagesize -a:

   $ pagesize -a
   8192
   65536
   524288
   4194304

3. Run trapstat -T. The value shown in the ttl row and %time column is the
   percentage of time the processor(s) spent in virtual-to-physical memory address
   translations. Depending on %time, choose a page size that will help reduce the
   iTLB/dTLB miss rate.
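Since trapstat is Solaris-only, the check can also be done offline against a captured snapshot. The sketch below is not from the original document, and the sample layout is a simplified, hypothetical rendering of trapstat -T output; it pulls the ttl row's overall %time with awk.

```shell
# Hypothetical, simplified trapstat -T snapshot: the last |-delimited
# field is the combined iTLB+dTLB miss %time for that row.
cat > /tmp/trapstat.sample <<'EOF'
cpu m size| itlb-miss %tim dtlb-miss %tim |%tim
      ttl |      1300  0.3     45000  9.8 |10.1
EOF
# Print the overall %time from the ttl row; a high value suggests
# trying a larger page size.
awk -F'|' '$1 ~ /ttl/ { gsub(/ /, "", $3); print $3 }' /tmp/trapstat.sample
```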

4. Create a simple config file for MPSS as follows:

   oracle*:<desirable heap size>:<desirable stack size>

   The desirable heap and stack sizes must be among the supported HAT sizes. By
   default, the page size for heap and stack is 8 Kbytes on all Solaris releases.
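As an illustration only (the 4M page size here is an assumption; pick one of the sizes reported by pagesize -a on your system), a filled-in config file might look like:

```shell
# Hypothetical /tmp/mpsscfg: exec-name:heap-page-size:stack-page-size.
cat > /tmp/mpsscfg <<'EOF'
oracle*:4M:4M
EOF
```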



5. Set the environment variables MPSSCFGFILE and MPSSERRFILE. MPSSCFGFILE should
   point to the config file created in Step 4. MPSS writes any runtime errors to the
   file named by MPSSERRFILE.

6. Preload the MPSS interposing library, mpss.so.1, and bring up the Oracle server.
   It is recommended to set the MPSSCFGFILE, MPSSERRFILE, and LD_PRELOAD environment
   variables in the Oracle startup script.

   With all of the environment variables mentioned above, a typical startup script
   may look like the following:

echo starting listener
lsnrctl start
echo preloading mpss.so.1 ..
MPSSCFGFILE=/tmp/mpsscfg
MPSSERRFILE=/tmp/mpsserr
LD_PRELOAD=/usr/lib/mpss.so.1:$LD_PRELOAD
export MPSSCFGFILE MPSSERRFILE LD_PRELOAD
echo starting oracle server processes ..
sqlplus /nolog


Siebel Object Name    Type   Growth in Bytes
S_DOCK_TXN_LOG        TABLE  1,177,673,728
S_EVT_ACT             TABLE  190,341,120
S_DOCK_TXN_LOG_P1     INDEX  96,116,736
S_DOCK_TXN_LOG_F1     INDEX  52,600,832
S_ACT_EMP             TABLE  46,202,880
S_SRV_REQ             TABLE  34,037,760
S_AUDIT_ITEM          TABLE  29,818,880
S_OPTY_POSTN          TABLE  28,180,480
S_ACT_EMP_M1          INDEX  25,600,000
S_EVT_ACT_M1          INDEX  23,527,424
S_EVT_ACT_M5          INDEX  22,519,808
S_ACT_EMP_U1          INDEX  21,626,880
S_ACT_CONTACT         TABLE  21,135,360
S_EVT_ACT_U1          INDEX  18,391,040
S_ACT_EMP_M3          INDEX  16,850,944
S_EVT_ACT_M9          INDEX  16,670,720
S_EVT_ACT_M7          INDEX  16,547,840
S_AUDIT_ITEM_M2       INDEX  16,277,504
S_ACT_EMP_P1          INDEX  15,187,968
S_AUDIT_ITEM_M1       INDEX  14,131,200

Table 7.6.5

7.6.6 Oracle Parameters Tuning

Here are the key Oracle init.ora parameters that were tuned.

db_cache_size=3048576000

This parameter sets the size of Oracle's database buffer cache, the largest component
of the SGA (Shared Global Area). Database performance is highly dependent on available
memory. In general, more memory increases caching, which reduces physical I/O to the
disks. Oracle's SGA is a memory region in the application that caches database tables
and other data for processing. With 32-bit Oracle software on a 64-bit Solaris OS, the
SGA is limited to 4 Gbytes.

Oracle comes in two basic architectures: 64-bit and 32-bit. The number of address bits
determines the maximum size of the virtual address space:

32 bits = 2^32 = 4 GB maximum
64 bits = 2^64 = 16,777,216 TB maximum

For the Siebel 10,000 concurrent users PSPP workload, the 4-Gbyte SGA was sufficient.
As a result, the 32-bit Oracle server version was used.
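The 4-Gbyte ceiling follows directly from the address width. A quick arithmetic check (a sketch, not from the original document; bash is assumed for the arithmetic expansion):

```shell
# 2^32 bytes expressed in Gbytes: the hard limit for a 32-bit SGA.
echo "$(( (1 << 32) / (1024 * 1024 * 1024) )) GB"
```

This prints 4 GB, the limit quoted above.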



db_block_max_dirty_target=0
db_writer_processes=4
fast_start_io_target=0

The default value of db_block_max_dirty_target was changed from 4294967294 to 0.
Setting this value to 0 disables the writing of buffers for incremental checkpointing
purposes. This stops deferred checkpointing and saves CPU and bus cycles. The default
value for db_writer_processes is 1; changing it to 4 starts four dbwr processes.
These three parameter changes drastically reduced wait times in the database and thus
improved overall Siebel throughput.

db_block_size=8K – The default value is 2K. An 8K value is optimal for Siebel.

db_block_lru_latches=48 – Specifies the upper bound of the number of LRU latch sets.
Set this parameter to the desired number of LRU latch sets. Oracle decides whether to
use this value or reduce it based on a number of internal checks. If the parameter is
not set, Oracle calculates the number of sets based on the number of CPUs. The value
calculated by Oracle is usually adequate; increase this value only if misses are
higher than 3% in V$LATCH. For the Siebel 10,000-user run on a four-CPU machine, a
value of 48 was optimal.

distributed_transactions=0 – Setting this value to 0 disables the Oracle background
process called reco. Siebel does not use distributed transactions, so we can regain
CPU and bus cycles by running one less Oracle background process. The default value
is 99.

replication_dependency_tracking=FALSE – Siebel does not use replication, so it is
safe to turn this off by setting it to false.

transaction_auditing=FALSE – Writes less redo on every commit. It is the nature of
Siebel OLTP to perform many small transactions with frequent commits; setting
transaction_auditing to false buys back CPU and bus cycles.

Here is the complete listing of the init.ora file used, with all the parameters set
for 10,000 Siebel users.

Oracle 9.2.0.2 init.ora file

# Oracle 9.2.0.2 init.ora for Solaris 9, running up to 10000 Siebel 7.5.2 user benchmark.
db_block_size=8192
db_cache_size=3048576000
db_domain=""
db_name=oramst
background_dump_dest=/export/pspp/oracle/admin/oramst/bdump
core_dump_dest=/export/pspp/oracle/admin/oramst/cdump
timed_statistics=FALSE
user_dump_dest=/export/pspp/oracle/admin/oramst/udump
control_files=("/export/pspp/oracle/admin/oramst/ctl/control01.ctl",
"/export/pspp/oracle/admin/oramst/ctl/control02.ctl",
"/export/pspp/oracle/admin/oramst/ctl/control03.ctl")



instance_name=oramst
job_queue_processes=0
aq_tm_processes=0
compatible=9.2.0.0.0
hash_join_enabled=TRUE
query_rewrite_enabled=FALSE
star_transformation_enabled=FALSE
java_pool_size=0
large_pool_size=8388608
shared_pool_size=503316480
processes=2500
pga_aggregate_target=25165824
rollback_segments=(rb_001,rb_002,rb_003,rb_004,rb_005,rb_006,rb_007,rb_008,rb_009,
rb_010,rb_011,rb_012,rb_013,rb_014,rb_015,rb_016,rb_017,rb_018,rb_019,rb_020,rb_021,
rb_022,rb_023,rb_024,rb_025,rb_026,rb_027,rb_028,rb_029,rb_030,rb_031,rb_032,rb_033,
rb_034,rb_035,rb_036,rb_037,rb_038,rb_039,rb_040,rb_041,rb_042,rb_043,rb_044,rb_045,
rb_046,rb_047,rb_048,rb_049,rb_050,rb_051,rb_052,rb_053,rb_054,rb_055,rb_056,rb_057,
rb_058,rb_059,rb_060,rb_061,rb_062,rb_063,rb_064,rb_065,rb_066,rb_067,rb_068,rb_069,
rb_070,rb_071,rb_072,rb_073,rb_074,rb_075,rb_076,rb_077,rb_078,rb_079,rb_080,rb_081,
rb_082,rb_083,rb_084,rb_085,rb_086,rb_087,rb_088,rb_089,rb_090,rb_091,rb_092,rb_093,
rb_094,rb_095,rb_096,rb_097,rb_098,rb_099,rb_100)
log_checkpoint_timeout=10000000000000000
nls_sort=BINARY
sort_area_size=10485760
sort_area_retained_size=10485760
nls_date_format="MM-DD-YYYY:HH24:MI:SS"
transaction_auditing=false
replication_dependency_tracking=false
session_cached_cursors=8000
open_cursors=4048
cursor_space_for_time=TRUE
db_file_multiblock_read_count=8  # stripe size is 64K and not 1M
db_block_checksum=FALSE
log_buffer=10485760
optimizer_mode=RULE
filesystemio_options=setall
pre_page_sga=TRUE
fast_start_mttr_target=0
db_writer_processes=6
distributed_transactions=0
transaction_auditing=FALSE
replication_dependency_tracking=FALSE
#timed_statistics=TRUE  # turned off
max_rollback_segments=120
job_queue_processes=0
java_pool_size=0
db_block_lru_latches=48
db_writer_processes=4
session_cached_cursors=8000
FAST_START_IO_TARGET=0
DB_BLOCK_MAX_DIRTY_TARGET=0
pre_page_sga=TRUE

7.6.7 Solaris Kernel Parameters on Oracle Database Server

Parameter               Scope         Default Value   Tuned Value
shmsys:shminfo_shmmax   /etc/system   -               0xffffffffffffffff
shmsys:shminfo_shmmin   /etc/system   -               100
shmsys:shminfo_shmseg   /etc/system   -               200
semsys:seminfo_semmns   /etc/system   -               16384
semsys:seminfo_semmsl   /etc/system   -               4096
semsys:seminfo_semmni   /etc/system   -               4096
semsys:seminfo_semmap   /etc/system   -               4096
semsys:seminfo_semmnu   /etc/system   -               4096
semsys:seminfo_semopm   /etc/system   -               4096
semsys:seminfo_semume   /etc/system   -               2048
semsys:seminfo_semvmx   /etc/system   -               32767
semsys:seminfo_semaem   /etc/system   -               16384
msgsys:msginfo_msgmni   /etc/system   -               4096
msgsys:msginfo_msgtql   /etc/system   -               4096
msgsys:msginfo_msgmax   /etc/system   -               16384
msgsys:msginfo_msgmnb   /etc/system   -               16384
rlim_fd_max             /etc/system   1024            16384
rlim_fd_cur             /etc/system   64              16384

Table 7.6.7
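On Solaris these settings live in /etc/system and take effect only after a reboot. As an illustration (a hypothetical fragment, not from the original document), the file-descriptor entries from the table would be written in the set name=value syntax; the other rows follow the same form:

```shell
# Hypothetical fragment in /etc/system syntax; on a real system these
# lines go into /etc/system and require a reboot to take effect.
cat > /tmp/etc_system.fragment <<'EOF'
set rlim_fd_max=16384
set rlim_fd_cur=16384
EOF
```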

7.6.8 SQL Query Tuning

During the course of the test, the most resource-intensive and long-running queries
were tracked. In general, the best way to tune a query is to change the SQL statement
(keeping the result set the same). The other method is to add or drop indexes so that
the execution plan changes. The latter method is the only option in most benchmark
tests. We added four additional indexes to the Siebel schema, which helped
performance. With Siebel 7.5 there is no support for CBO (cost-based optimization)
with the Oracle database; CBO support is available in Siebel 7.7.

The following example shows one of the resource-consuming queries.

Buffer Gets   Executions   Gets per Exec   % Total   Hash Value
-----------   ----------   -------------   -------   ----------
220,402,077       35,696         6,174.4      33.2   2792074251

This query was responsible for 33% of the total buffer gets from all queries during
the benchmark tests.

SELECT
    T4.LAST_UPD_BY,
    T4.ROW_ID,
    T4.CONFLICT_ID,
    T4.CREATED_BY,
    T4.CREATED,
    T4.LAST_UPD,
    T4.MODIFICATION_NUM,
    T1.PRI_LST_SUBTYPE_CD,
    T4.SHIP_METH_CD,
    T1.PRI_LST_NAME,
    T4.SUBTYPE_CD,
    T4.FRGHT_CD,
    T4.NAME,
    T4.BU_ID,
    T3.ROW_ID,
    T2.NAME,
    T1.ROW_ID,
    T4.CURCY_CD,
    T1.BU_ID,
    T4.DESC_TEXT,
    T4.PAYMENT_TERM_ID
FROM
    ORAPERF.S_PRI_LST_BU T1,
    ORAPERF.S_PAYMENT_TERM T2,
    ORAPERF.S_PARTY T3,
    ORAPERF.S_PRI_LST T4
WHERE
    T4.PAYMENT_TERM_ID = T2.ROW_ID (+)
    AND T1.BU_ID = :V1
    AND T4.ROW_ID = T1.PRI_LST_ID
    AND T1.BU_ID = T3.ROW_ID
    AND ((T1.PRI_LST_SUBTYPE_CD != 'COST LIST'
          AND T1.PRI_LST_SUBTYPE_CD != 'RATE LIST')
         AND (T4.EFF_START_DT = TO_DATE(:V3,'MM/DD/YYYY HH24:MI:SS'))
         AND T1.PRI_LST_NAME LIKE :V4
         AND T4.CURCY_CD = :V5)
ORDER BY T1.BU_ID, T1.PRI_LST_NAME;

Execution plan and statistics before the new index was added:

Execution Plan
----------------------------------------------------------
 0      SELECT STATEMENT Optimizer=RULE
 1   0    NESTED LOOPS (OUTER)
 2   1      NESTED LOOPS
 3   2        NESTED LOOPS
 4   3          TABLE ACCESS (BY INDEX ROWID) OF 'S_PRI_LST_BU'
 5   4            INDEX (RANGE SCAN) OF 'S_PRI_LST_BU_M1' (NON-UNIQUE)
 6   3          TABLE ACCESS (BY INDEX ROWID) OF 'S_PRI_LST'
 7   6            INDEX (UNIQUE SCAN) OF 'S_PRI_LST_P1' (UNIQUE)
 8   2        INDEX (UNIQUE SCAN) OF 'S_PARTY_P1' (UNIQUE)
 9   1      TABLE ACCESS (BY INDEX ROWID) OF 'S_PAYMENT_TERM'
10   9        INDEX (UNIQUE SCAN) OF 'S_PAYMENT_TERM_P1' (UNIQUE)

Statistics
----------------------------------------------------------
   364  recursive calls
     1  db block gets
 41755  consistent gets
     0  physical reads
     0  redo size
754550  bytes sent via SQL*Net to client
 27817  bytes received via SQL*Net from client
   341  SQL*Net roundtrips to/from client
     4  sorts (memory)
     0  sorts (disk)
  5093  rows processed

New index created:

create index S_PRI_LST_X2 on S_PRI_LST
(CURCY_CD, EFF_END_DT, EFF_START_DT)
STORAGE(INITIAL 512K NEXT 512K
MINEXTENTS 1 MAXEXTENTS UNLIMITED PCTINCREASE 0 FREELISTS 7 FREELIST
GROUPS 7 BUFFER_POOL DEFAULT) TABLESPACE INDX NOLOGGING PARALLEL 4;

As the difference in statistics shows, the new index reduced consistent gets by about
a third (from 41,755 to 27,698) and eliminated the recursive calls entirely.
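The improvement can be quantified directly from the two statistics blocks; the numbers below are taken from the runs shown in this section:

```shell
# Percentage drop in consistent gets after adding S_PRI_LST_X2.
awk 'BEGIN { before = 41755; after = 27698
             printf "%.1f%% fewer consistent gets\n", (before - after) * 100 / before }'
```

This prints 33.7% fewer consistent gets.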



Execution Plan
----------------------------------------------------------
 0      SELECT STATEMENT Optimizer=RULE
 1   0    SORT (ORDER BY)
 2   1      NESTED LOOPS
 3   2        NESTED LOOPS
 4   3          NESTED LOOPS (OUTER)
 5   4            TABLE ACCESS (BY INDEX ROWID) OF 'S_PRI_LST'
 6   5              INDEX (RANGE SCAN) OF 'S_PRI_LST_X2' (NON-UNIQUE)
 7   4            TABLE ACCESS (BY INDEX ROWID) OF 'S_PAYMENT_TERM'
 8   7              INDEX (UNIQUE SCAN) OF 'S_PAYMENT_TERM_P1' (UNIQUE)
 9   3          TABLE ACCESS (BY INDEX ROWID) OF 'S_PRI_LST_BU'
10   9            INDEX (RANGE SCAN) OF 'S_PRI_LST_BU_U1' (UNIQUE)
11   2        INDEX (UNIQUE SCAN) OF 'S_PARTY_P1' (UNIQUE)

Statistics
----------------------------------------------------------
     0  recursive calls
     0  db block gets
 27698  consistent gets
     0  physical reads
     0  redo size
754550  bytes sent via SQL*Net to client
 27817  bytes received via SQL*Net from client
   341  SQL*Net roundtrips to/from client
     1  sorts (memory)
     0  sorts (disk)
  5093  rows processed

Similarly, the three other new indexes added were:

• create index S_CTLG_CAT_PROD_F1 on ORAPERF.S_CTLG_CAT_PROD (CTLG_CAT_ID ASC)
  PCTFREE 10 INITRANS 2
  MAXTRANS 255 STORAGE (INITIAL 5120K NEXT 5120K MINEXTENTS 2 MAXEXTENTS UNLIMITED
  PCTINCREASE 0 FREELISTS 47 FREELIST GROUPS 47 BUFFER_POOL DEFAULT) TABLESPACE
  INDX_5120K NOLOGGING;

• create index S_PROG_DEFN_X1 on ORAPERF.S_PROG_DEFN
  (NAME, REPOSITORY_ID)
  STORAGE(INITIAL 512K NEXT 512K
  MINEXTENTS 1 MAXEXTENTS UNLIMITED PCTINCREASE 0
  BUFFER_POOL DEFAULT) TABLESPACE INDX_512K NOLOGGING;

• create index S_ESCL_OBJECT_X1 on ORAPERF.S_ESCL_OBJECT
  (NAME, REPOSITORY_ID, INACTIVE_FLG)
  STORAGE(INITIAL 512K NEXT 512K
  MINEXTENTS 1 MAXEXTENTS UNLIMITED PCTINCREASE 0
  BUFFER_POOL DEFAULT) TABLESPACE INDX_512K NOLOGGING;

The last two indexes are for the Assignment Manager tests. No inserts occurred on the
base tables during these tests.

7.6.9 Rollback Segment Tuning

An incorrect number or size of rollback segments causes poor performance. The right
number and size of rollback segments depend on the application workload. Most OLTP
workloads require several small rollback segments. The number of rollback segments
should be equal to or greater than the number of concurrent active transactions in
the database during peak load; in the Siebel tests this was about 80 during a
10,000-user test.

The size of each rollback segment should be approximately equal to the size in bytes
of a user transaction. For the Siebel workload, 100 rollback segments of 20 Mbytes
each, with an extent size of 1 Mbyte, were found suitable. Note: if the rollback
segments are sized larger than the application requires, valuable space in the
database cache is wasted.
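A quick check of the space this sizing commits, using the numbers from the paragraph above:

```shell
# 100 rollback segments of 20 Mbytes each, with 1-Mbyte extents.
SEGMENTS=100; SEG_MB=20; EXTENT_MB=1
echo "$(( SEGMENTS * SEG_MB )) MB total, $(( SEG_MB / EXTENT_MB )) extents per segment"
```

This prints 2000 MB total, 20 extents per segment.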

Oracle's newer UNDO segments feature, which can be used instead of rollback segments,
was not tested with the Siebel application during this project.

7.6.10 Database Connectivity Using the Host Names Adapter

High-end Siebel test runs using Oracle as the backend have been observed to perform
better when client-to-Oracle-server connectivity uses the host names adapter feature.
This Oracle feature provides an alternate method of connecting to the Oracle database
server without using the tnsnames.ora file.

Set GLOBAL_DBNAME to something other than the ORACLE_SID in the listener.ora file on
the database server, as shown below. Bounce the Oracle listener after making this
change.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver)(PORT = 1521))
      )
    )
    (DESCRIPTION =
      (PROTOCOL_STACK =
        (PRESENTATION = GIOP)
        (SESSION = RAW)
      )
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver)(PORT = 2481))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /export/pspp/oracle)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = oramst.dbserver)
      (ORACLE_HOME = /export/pspp/oracle)
      (SID_NAME = oramst)



On the Siebel applications server, go into the Oracle client installation and delete
(or rename) the tnsnames.ora and sqlnet.ora files. You no longer need them, as Oracle
now connects to the database by resolving the name from the /etc/hosts file. As root,
edit the file /etc/hosts on the client machine (that is, the Siebel applications
server in this case) and add an entry like the following:

<IP address of database server>  oramst.dbserver

The name oramst.dbserver should be whatever you provided as the GLOBAL_DBNAME in the
listener.ora file. This becomes the connect string used to connect to this database
from any client.

7.6.11 High I/O with Oracle Shadow Processes Connected to Siebel

The disk on which the Oracle binaries were installed was close to 100% busy during
the peak load of 1000 concurrent users. This problem was diagnosed as the well-known
oraus.msb problem: on Oracle clients that use OCI (Oracle Call Interface), the OCI
driver makes thousands of calls to translate messages from the oraus.msb file. The
problem has been documented by Oracle under bug ID 2142623.

The Sun workaround for this problem is to cache the oraus.msb file in memory and
translate the file access and system calls into user calls and memory operations.
The caching solution is dynamic, and no code changes are needed. With Siebel, this
workaround helped reduce the calls: the 100%-busy I/O went away, a 4% reduction in
CPU was observed, and transaction response times improved by 11%.

This problem is supposed to have been fixed in Oracle 9.2.0.4. Details on how to
implement the Sun workaround by LD_PRELOADing an interposer library are available at
http://developers.sun.com/solaris/articles/oci_cache.html.

7.7 Siebel Database Connection Pooling

The database connection pooling feature built into the Siebel server software
provides better performance. A users-to-database-connections ratio of 20:1 has been
proven to give good results with Siebel 7.5 and Oracle 9.2.0.2. This reduced CPU at
the Siebel server by about 3%, as fewer connections were made from the Siebel server
to the database during a 2000-user Call Center test. Siebel memory per user was 33%
lower and Oracle memory per user was 79% lower, because 20 Siebel users share the
same Oracle connection.

Siebel anonymous users do not use connection pooling. If the anonymous user count is
set too high (that is, higher than the recommended 10 to 20%), you end up wasting
tasks, as MaxTasks is a number inclusive of real users. Also, because the anonymous
sessions do not use connection pooling, you would have many one-to-one connections
and increased memory and CPU use on both the database server and the applications
server.

How to Enable Connection Pooling

Set the following Siebel parameters at the server level via the Siebel thin client GUI or srvrmgr:

MaxSharedDbConns integer full
MinSharedDbConns integer full
MaxTrxDbConns integer full

Then bounce the Siebel server. For example, if you are set up to run 1000 users, the value is 1000/20 = 50. Set all three parameters to the same value (50) to direct Siebel to share a single database connection among 20 Siebel users or tasks.

srvrmgr:SIEBSRVR> change param MaxTasks=1100, MaxMTServers=20, MinMTServers=20, MinSharedDbConns=50, MaxSharedDbConns=50, MinTrxDbConns=50 for comp esalesobjmgr_enu

How to Check if Connection Pooling is Enabled

During the steady state, log in to the database server and run:

ps -eaf | grep NO | wc -l

This should return around 50 for this example; if it returns 1000, then connection pooling is not in effect.
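The sizing arithmetic above can be sketched in shell; the 20:1 ratio and the 1000-user figure come from the example, and the variable names are purely illustrative:

```shell
# Compute the shared-DB-connection pool size from the user count,
# using the 20 users : 1 connection ratio described above.
users=1000
ratio=20
pool=$(( users / ratio ))
echo "Set MinSharedDbConns/MaxSharedDbConns to $pool"
```

At steady state, the ps count on the database server should then hover near this value rather than near the user count.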

7.8 Tuning Sun Java System Directory Server (LDAP)

To prevent connections from timing out, change the following parameter on the LDAP directory server: idletimeout = 15 secs. Also increase the cache entries parameter for LDAP from the default to 25,000. To change these parameters:

1. Stop the LDAP Server.
2. Change the value associated with cache entries in slapd.conf.
3. Change the value associated with idletimeout in slapd.conf to 15.
4. Restart the LDAP Server.



8 Performance Tweaks with No Gains

This section is important because it sheds light on various myths of performance tuning. Sometimes a tunable that you think is sure to result in a performance gain does not, but it ends up being credited as if it did. These are tunables that mostly help other applications in a different scenario, or are default settings already in effect. This section lists some of the tuning parameters that provided no gain when tested.

Please note: These observations are specific to the workload, architecture, software versions, and so on used during the project at Sun ETC labs. The workload is described in Chapter 4. The outcome of certain tunables may vary when implemented with a different workload on a different architecture/configuration.

Changing the mainwin address MW_GMA_VADDR=0xc0000000

Changing this to other values did not seem to make a big difference to performance. This value is set in the siebenv file.

Solaris Kernel Parameter stksize

The default value is 16K (0x4000) on sun4u (Ultra) architecture machines booted in 64-bit mode (which is always the default). Increasing this to 24K (0x6000) via the settings below in the /etc/system file did not provide any performance gains during the tests.

set rpcmod:svc_default_stksize=0x6000
set lwp_default_stksize=0x6000

Siebel Server Parameters

1. Recycle Factor. Enabling this did not provide any performance gains. The default is disabled.

2. SISSPERSISSCONN. This parameter changes the multiplexing ratio between the Siebel server and the web server; its default value is 20. Varying this value did not result in any performance gains for the specific modules tested in this project with the PSPP standard workload (defined in Chapter 4).

Resonate: Cache Size -- RES_PERSIST_BLOCK_SIZE

This is an environment variable set inside the Resonate startup script. It was raised as high as 240 Kbytes with no difference in behavior.

Sun Java System Web Server maxprocs

This parameter, when changed from the default of 1, starts up more than one ns-httpd (web server) process. No gain was measured with a value greater than 1. It was better to use a new web server instance.

Database Connection Pooling with Siebel Server Components

Enabling this for the server component batch workload caused performance to degrade such that server processes could not start. Some of the server component modules connect to the database using ODBC, which does not support connection pooling.



Oracle Database Server

1. Larger than required shared_pool_size. For a 10,000 Siebel user benchmark, a value of 400 Mbytes was more than sufficient. Too large a value for shared_pool_size wastes valuable database cache memory.

2. Large SGA. A larger than required SGA size is not going to improve performance, whereas too small an SGA size will degrade performance.

3. Large RBS. Rollback segments larger than the application requires waste space in the database cache. It is better to make the application commit more often.



9 Scripts, Tips, and Tricks for Diagnosing Siebel on the Sun Platform

We found the following tips to be very helpful for diagnosing performance and scalability issues while running Siebel on the Sun platform.

9.1 To Monitor Siebel Open Session Statistics

You can use the following URLs to monitor the amount of time that a Siebel end user transaction is taking within your Siebel enterprise. This data is updated in near real time.

Call Center: http://webserver:port/callcenter_enu/_stats.swe?verbose=high
eSales: http://webserver:port/esales_enu/_stats.swe?verbose=high
eService: http://webserver:port/eservice_enu/_stats.swe?verbose=high
eChannel: http://webserver:port/prmportal_enu/_stats.swe?verbose=high

Table 10.1

The stats page provides a lot of diagnostic information. Watch out for any rows that are in bold; they represent requests that have been waiting for over 10 seconds.
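Since the four URLs differ only in the application path, a small loop can print them all for a given web server; `webserver:port` is the placeholder from the table above, not a real host:

```shell
# Print the _stats.swe monitoring URL for each Siebel application path.
host="webserver:port"
for app in callcenter esales eservice prmportal; do
  printf 'http://%s/%s_enu/_stats.swe?verbose=high\n' "$host" "$app"
done
```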

9.2 To List the Parameter Settings for a Siebel Server

Use the following server command to list all parameter settings for a Siebel server:

srvrmgr> list params for server servername comp component show PA_ALIAS, PA_VALUE

Parameters of interest are MaxMTServers, MinMTServers, MaxTasks, MinSharedDbConns, MaxSharedDbConns, and MinTrxDbConns.

9.3 To Find Out All OMs Currently Running for a Component

Use the following server command to find all OMs running in a Siebel enterprise, sorted by Siebel component type:

$> srvrmgr /g gateway /e enterprise /u sadmin /p sadmin -c "list tasks for comp component show SV_NAME, CC_ALIAS, TK_PID, TK_DISP_RUNSTATE" | grep Running | sort -k 3 | uniq -c | sort -k 2,2 -k 1,1
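The grep/sort/uniq tail of that pipeline can be tried on canned task rows (the rows below are synthetic, in the SV_NAME, CC_ALIAS, TK_PID, TK_DISP_RUNSTATE order the command requests):

```shell
# Count Running tasks per OM pid: filter out non-running rows, group
# identical rows (uniq -c needs sorted input, hence sort -k 3), then
# order by server name and count.
printf '%s\n' \
  'siebelapp2_1 SCCObjMgr_enu 19696 Running' \
  'siebelapp2_1 SCCObjMgr_enu 19696 Running' \
  'siebelapp2_1 eSalesObjMgr_enu 19943 Running' \
  'siebelapp2_1 SCCObjMgr_enu 19697 Paused' |
grep Running | sort -k 3 | uniq -c | sort -k 2,2 -k 1,1
```

Each output line is a count followed by the task row, just like the listing below.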

The following output came from one of the runs:

47 siebelapp2_1 eChannelObjMgr_enu 19913 Running
132 siebelapp2_1 eChannelObjMgr_enu 19923 Running
133 siebelapp2_1 eChannelObjMgr_enu 19918 Running
158 siebelapp2_1 eChannelObjMgr_enu 19933 Running
159 siebelapp2_1 eChannelObjMgr_enu 19928 Running
118 siebelapp2_1 eSalesObjMgr_enu 19943 Running
132 siebelapp2_1 eSalesObjMgr_enu 19948 Running
156 siebelapp2_1 eSalesObjMgr_enu 19963 Running
160 siebelapp2_1 eSalesObjMgr_enu 19953 Running
160 siebelapp2_1 eSalesObjMgr_enu 19958 Running
169 siebelapp2_1 eServiceObjMgr_enu 19873 Running
175 siebelapp2_1 eServiceObjMgr_enu 19868 Running
178 siebelapp2_1 eServiceObjMgr_enu 19883 Running
179 siebelapp2_1 eServiceObjMgr_enu 19878 Running
179 siebelapp2_1 eServiceObjMgr_enu 19888 Running
45 siebelapp2_1 SCCObjMgr_enu 19696 Running
45 siebelapp2_1 SCCObjMgr_enu 19702 Running
51 siebelapp2_1 SCCObjMgr_enu 19697 Running
104 siebelapp2_1 SCCObjMgr_enu 19727 Running

Run states are Running, Online, Shutting Down, Shutdown, and Unavailable. Running tasks should be evenly distributed across servers and close to MaxTasks.

9.4 To Find Out the Number of Active Servers for a Component

Use the following server command to find out the number of active MTS servers for a component:

srvrmgr> list comp component for server servername show SV_NAME, CC_ALIAS, CP_ACTV_MTS_PROCS, CP_MAX_MTS_PROCS

The number of active MTS servers should be close to the number of max MTS servers.

9.5 To Find Out the Tasks for a Component

Use the following server command to find the tasks for a component:

srvrmgr> list task for comp component server servername order by TK_TASKID

Ordering by task ID places the most recently started task at the bottom. It is a good sign if the most recently started tasks (that is, those started in the past few seconds or minutes) are still running. Otherwise, further investigation is required.

9.6 To Set Detailed Trace Levels on the Siebel Server Components (Siebel OM)

Log in to srvrmgr and execute the following commands:

change evtloglvl taskcounters=4 for comp sccobjmgr
change evtloglvl taskcounters=4 for comp eserviceobjmgr
change evtloglvl taskevents=3 for comp sccobjmgr
change evtloglvl taskevents=3 for comp eserviceobjmgr
change evtloglvl mtwarning=2 for comp sccobjmgr
change evtloglvl mtwarning=2 for comp eserviceobjmgr
set mtInfraTrace = True

9.7 To Find Out the Number of GUEST Logins for a Component

Use the following server command to find out the number of GUEST logins for a component:

$ srvrmgr /g gateway /e enterprise /s server /u sadmin /p sadmin /c "list task for comp component" | grep Running | grep GUEST | wc -l

9.8 To Calculate the Memory Usage for an OM

1. We use the following script, pmem_sum.sh, to calculate the memory usage for an OM:

#!/bin/sh
if [ $# -eq 0 ]; then
    echo "Usage: pmem_sum.sh <process_name>"
    exit 1
fi
WHOAMI=`/usr/ucb/whoami`
PIDS=`/usr/bin/ps -ef | grep $WHOAMI" " | grep $1 | grep -v "grep $1" | grep -v pmem_sum | \
    awk '{ print $2 }'`
for pid in $PIDS
do
    echo 'pmem process :' $pid
    # Save each process's full pmem report to hostname.user.pmem.pid
    pmem $pid > `uname -n`.$WHOAMI.pmem.$pid
done
# Sum the "total" lines across all processes; shared memory is divided
# by the process count because every process reports the same shared pages.
pmem $PIDS | grep total | awk 'BEGIN { FS = " " } {print $1,$2,$3,$4,$5,$6} \
{tot+=$4} {shared+=$5} {private+=$6} END {print "Total memory used:", tot/1024 "M by \
"NR" procs. Total Private mem: "private/1024" M Total Shared mem: " shared/1024 "M \
Actual used memory:" ((private/1024)+(shared/1024/NR)) "M"}'

2. To use it, type the following:

pmem_sum.sh siebmtshmw

9.9 To Find the Log File Associated with a Specific OM

1. Check the server log file for the creation of the multithreaded server process:

ServerLog Startup 1 2003-03-19 19:00:46 Siebel Application Server is ready and awaiting requests
…
ServerLog ProcessCreate 1 2003-03-19 19:00:46 Created multithreaded server process (OS pid = 24796) for Call Center Object Manager (ENU) with task id 22535
…

2. The log file associated with the preceding OM is SCCObjMgr_enu_24796.log.

1021 2003-03-19 19:48:04 2003-03-19 22:23:20 -0800 0000000d 001 001f 0001 09 SCCObjMgr_enu 24796 24992 111 /export/pspp/siebsrvr/enterprises/siebel2/siebelapp1/log/SCCObjMgr_enu_24796.log 7.5.2.210 [16060] ENUENU
…



9.10 To Produce a Stack Trace for the Current Thread of an OM

1. Find out the current thread number that the OM is running (assuming the pid is 24987):

% > cat SCCObjMgr_enu_24987.log
1021 2003-03-19 19:51:30 2003-03-19 22:19:41 -0800 0000000a 001 001f 0001 09 SCCObjMgr_enu 24987 24982 93 /export/pspp/siebsrvr/enterprises/siebel2/siebelapp1/log/SCCObjMgr_enu_24987.log 7.5.2.210 [16060] ENUENU
…

2. The thread number for the preceding example is 93.

3. Use pstack to produce the stack trace:

$ > pstack 24987 | sed -n '/lwp# 93/,/lwp# 94/p'
----------------- lwp# 93 / thread# 93 --------------------
7df44b7c lwp_mutex_lock (c00000c0)
7df40dc4 mutex_lock_kernel (4ea73a00, 0, 7df581b8, 7df56000, 0, c00000c0) + c8
7df41a64 mutex_lock_internal (4ea73a00, 7df581ac, 0, 7df56000, 1, 0) + 44c
7e3c430c CloseHandle (11edc, 7e4933a8, c01f08c8, 7ea003e4, c1528, 4ea73a98) + a8
7ea96958 __1cKCWinThread2T6M_v_ (7257920, 2, c1538, 1d, 0, 0) + 14
7ea97768 __SLIP.DELETER__B (7257920, 1, 7ebc37c0, 7ea00294, dd8f8, 4a17f81c) + 4
7ea965f0 __1cMAfxEndThread6FIi_v_ (7257ab8, 7257920, 0, 1, 1, 0) + 58
7edd2c6c __1cVOSDSolarisThreadStart6Fpv_0_ (7aba9d0, 1, c01f08c8, 51ecd, 1, 0) + 50
7fb411bc __1cUWslThreadProcWrapper6Fpv_I_ (7aba9e8, 7e4933a8, c01f08c8, c01f08c8, 0, ffffffff) + 48
7ea9633c __1cP_AfxThreadEntry6Fpv_I_ (51ecc, ffffffff, 1, 7ea9787c, 4000, 4a17fe10) + 114
7e3ca658 __1cIMwThread6Fpv_v_ (1, 7e4a6d00, 7e496400, c0034640, c01f0458, c01f08c8) + 2ac
7df44970 _lwp_start (0, 0, 0, 0, 0, 0)
----------------- lwp# 94 / thread# 94 --------------------
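The sed address range used above prints everything between two matching headers; a quick demonstration on fabricated pstack-style lines:

```shell
# sed -n '/A/,/B/p' prints from the first line matching A through the
# first subsequent line matching B -- here, one lwp's frames.
printf '%s\n' \
  '----------------- lwp# 92 / thread# 92 ---' \
  'frame_in_92' \
  '----------------- lwp# 93 / thread# 93 ---' \
  'frame_in_93' \
  '----------------- lwp# 94 / thread# 94 ---' |
sed -n '/lwp# 93/,/lwp# 94/p'
```

Only the lwp# 93 header, its frames, and the lwp# 94 header are printed.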

9.11 To Show System-Wide Lock Contention Issues Using lockstat

1. You can use lockstat to find out a lot about lock contention. One interesting question is, "What is the most contended lock in the system?"

2. The following shows the system lock contention during one of the 4600-user runs with large latch values and double ramp-up time.

# lockstat sleep 5

Adaptive mutex spin: 17641 events in 4.998 seconds (3529 events/sec)

Count indv cuml rcnt spin Lock Caller
-------------------------------------------------------------------------------
3403 19% 19% 1.00 51 0x30017e123e0 hmestart+0x1c8
3381 19% 38% 1.00 130 service_queue background+0x130
3315 19% 57% 1.00 136 service_queue background+0xdc
2142 12% 69% 1.00 86 service_queue qenable_locked+0x38
853 5% 74% 1.00 41 0x30017e123e0 hmeintr+0x2dc
…
1 0% 100% 1.00 5 0x300267b75f0 lwp_unpark+0x60
1 0% 100% 1.00 18 0x3001d9a79c8 background+0xb0
----------------------------------------------------------------------------

Adaptive mutex block: 100 events in 4.998 seconds (20 events/sec)

Count indv cuml rcnt nsec Lock Caller
----------------------------------------------------------------------------
25 25% 25% 1.00 40179 0x30017e123e0 hmeintr+0x2dc
8 8% 33% 1.00 765800 0x30017e123e0 hmestart+0x1c8
6 6% 39% 1.00 102226 service_queue background+0xdc
5 5% 44% 1.00 93376 service_queue background+0x130
…
1 1% 100% 1.00 74480 0x300009ab000 callout_execute+0x98
----------------------------------------------------------------------------

Spin lock spin: 18814 events in 4.998 seconds (3764 events/sec)

Count indv cuml rcnt spin Lock Caller
----------------------------------------------------------------------------
2895 15% 15% 1.00 2416 sleepq_head+0x8d8 cv_signal+0x38
557 3% 18% 1.00 1184 cpu[10]+0x78 disp_getbest+0xc
486 3% 21% 1.00 1093 cpu[2]+0x78 disp_getbest+0xc
…
1 0% 100% 1.00 1001 turnstile_table+0xf68 turnstile_lookup+0x50
1 0% 100% 1.00 1436 turnstile_table+0xbf8 turnstile_lookup+0x50
1 0% 100% 1.00 1618 turnstile_table+0xc18 turnstile_lookup+0x50
----------------------------------------------------------------------------

Thread lock spin: 33 events in 4.998 seconds (7 events/sec)

Count indv cuml rcnt spin Lock Caller
----------------------------------------------------------------------------
2 6% 6% 1.00 832 sleepq_head+0x8d8 setrun+0x4
2 6% 12% 1.00 112 cpu[3]+0xb8 ts_tick+0xc
2 6% 18% 1.00 421 cpu[8]+0x78 ts_tick+0xc
…
1 3% 97% 1.00 1 cpu[14]+0x78 turnstile_block+0x20c
1 3% 100% 1.00 919 sleepq_head+0x328 ts_tick+0xc
----------------------------------------------------------------------------

R/W writer blocked by writer: 73 events in 4.998 seconds (15 events/sec)

Count indv cuml rcnt nsec Lock Caller
----------------------------------------------------------------------------
8 11% 11% 1.00 100830 0x300274e5600 segvn_setprot+0x34
5 7% 18% 1.00 87520 0x30029577508 segvn_setprot+0x34
4 5% 23% 1.00 96020 0x3002744a388 segvn_setprot+0x34
…
1 1% 99% 1.00 152960 0x3001e296650 segvn_setprot+0x34
1 1% 100% 1.00 246960 0x300295764e0 segvn_setprot+0x34
----------------------------------------------------------------------------

R/W writer blocked by readers: 40 events in 4.998 seconds (8 events/sec)

Count indv cuml rcnt nsec Lock Caller
----------------------------------------------------------------------------
4 10% 10% 1.00 54860 0x300274e5600 segvn_setprot+0x34
3 8% 18% 1.00 55733 0x3002744a388 segvn_setprot+0x34
3 8% 25% 1.00 102240 0x3001c729668 segvn_setprot+0x34
…
1 2% 98% 1.00 48720 0x3002759b500 segvn_setprot+0x34
1 2% 100% 1.00 46480 0x300295764e0 segvn_setprot+0x34
----------------------------------------------------------------------------

R/W reader blocked by writer: 52 events in 4.998 seconds (10 events/sec)

Count indv cuml rcnt nsec Lock Caller
----------------------------------------------------------------------------
5 10% 10% 1.00 131488 0x300274e5600 segvn_fault+0x38
3 6% 15% 1.00 111840 0x3001a62b940 segvn_fault+0x38
3 6% 21% 1.00 139253 0x3002792f2a0 segvn_fault+0x38
…
1 2% 98% 1.00 98400 0x3001e296650 segvn_fault+0x38
1 2% 100% 1.00 100640 0x300295764e0 segvn_fault+0x38
----------------------------------------------------------------------------

Lockstat record failure: 5 events in 4.998 seconds (1 events/sec)

Count indv cuml rcnt Lock Caller
----------------------------------------------------------------------------
5 100% 100% 0.00 lockstat_lock lockstat_record
----------------------------------------------------------------------------

9.12 To Show the Lock Statistics of an OM Using plockstat

1. The syntax is: plockstat [ -o outfile ] -p pid. The program grabs a process and shows the lock statistics upon exit or interrupt.

2. The following shows the lock statistics of an OM during one of the 4600-user runs with large latch values and double ramp-up time:

$> plockstat -p 4027
^C
----------- mutex lock statistics -----------
lock try_lock sleep avg sleep avg hold location:
count count fail count time usec time usec name
2149 0 0 1 5218 142 siebmtshmw: __environ_lock
2666 0 0 0 0 3 [heap]: 0x9ebd0
948 0 0 0 0 1 [heap]: 0x9f490
312 0 0 2 351 88 [heap]: 0x9f4c8
447 0 0 0 0 2 [heap]: 0x9f868
237 0 0 0 0 101 [heap]: 0x9f8a0
2464 0 0 1 4469 2 [heap]: 0xa00f0
1 0 0 0 0 11 [heap]: 0x17474bc0
…
219 0 0 0 0 2 libsscassmc: m_cacheLock+0x8
41 41 0 0 0 2 0x79a2a828
152295 0 0 15 11407 1 libthread: tdb_hash_lock
2631 0 0 10 297603 468 libc: _time_lock
1807525 0 0 16762 59752 14 libc: __malloc_lock
----------- condvar statistics -----------
cvwait avg sleep tmwait timout avg sleep signal brcast location:
count time usec count count time usec count count name
0 0 41 40 4575290 0 0 [heap]: 0x2feec30
8 16413463 0 0 0 8 0 [heap]: 0x305fce8
20 7506539 0 0 0 20 0 [heap]: 0x4fafbe8
16 6845818 0 0 0 16 0 [heap]: 0x510a8d8
…
12 8960055 0 0 0 12 0 [heap]: 0x110f6138
13 10375600 0 0 0 13 0 [heap]: 0x1113e040
----------- readers/writer lock statistics -----------
rdlock try_lock sleep avg sleep wrlock try_lock sleep avg sleep avg hold location:
count count fail count time usec count count fail count time usec time usec name
382 0 0 0 0 0 0 0 0 0 0 [heap]: 0x485c2c0
102100 0 0 0 0 0 0 0 0 0 0 libsscfdm: g_CTsharedLock

9.13 To "Truss" an OM

1. Modify siebmtshw:

#!/bin/ksh
. $MWHOME/setmwruntime
MWUSER_DIRECTORY=${MWHOME}/system
LD_LIBRARY_PATH=/usr/lib/lwp:${LD_LIBRARY_PATH}
#exec siebmtshmw $@
truss -l -o /tmp/$$.siebmtshmw.trc siebmtshmw $@

2. After you start up the server, this wrapper creates the truss output in a file named pid.siebmtshmw.trc in /tmp.

9.14 How to Trace <strong>the</strong> SQL Statements for a Particular <str<strong>on</strong>g>Siebel</str<strong>on</strong>g><br />

Transacti<strong>on</strong><br />

If resp<strong>on</strong>se times are high, or if you think that <strong>the</strong> database is a bottleneck, you can<br />

check how l<strong>on</strong>g <strong>the</strong> SQL queries are taking to execute by running a SQL trace <strong>on</strong> a<br />

LoadRunner script. The SQL trace is run through <str<strong>on</strong>g>Siebel</str<strong>on</strong>g> and tracks all of <strong>the</strong> database<br />

activity and how l<strong>on</strong>g things take to execute. If executi<strong>on</strong> times are too high, <strong>the</strong>re is a<br />

problem with <strong>the</strong> database c<strong>on</strong>figurati<strong>on</strong>, which most likely is c<strong>on</strong>tributing to high<br />

resp<strong>on</strong>se times. To run a SQL Trace <strong>on</strong> a particular script, follow <strong>the</strong>se instructi<strong>on</strong>s:<br />

1. C<strong>on</strong>figure <strong>the</strong> <str<strong>on</strong>g>Siebel</str<strong>on</strong>g> envir<strong>on</strong>ment to <strong>the</strong> default settings (that is, <strong>the</strong> comp<strong>on</strong>ent<br />

should have <strong>on</strong>ly <strong>on</strong>e OM, and so <strong>on</strong>).<br />

2. Open <strong>the</strong> LoadRunner script in questi<strong>on</strong> in <strong>the</strong> Virtual User Generator.<br />

3. Place a breakpoint at <strong>the</strong> end of Acti<strong>on</strong> 1 and before Acti<strong>on</strong> 2. This will stop<br />

<strong>the</strong> user at <strong>the</strong> breakpoint.<br />

4. Run a user.<br />

<str<strong>on</strong>g>Performance</str<strong>on</strong>g> <str<strong>on</strong>g>Tuning</str<strong>on</strong>g> <str<strong>on</strong>g>Siebel</str<strong>on</strong>g> <str<strong>on</strong>g>Software</str<strong>on</strong>g> <strong>on</strong> <strong>the</strong> <strong>Sun</strong> <strong>Platform</strong> Page 64


5. Once the user has stopped at the breakpoint, enable SQL tracing via srvrmgr:

   change evtloglvl ObjMgrSqlLog=4 for comp <component_name>

6. Press Play, which resumes the user.

7. Wait until the user has finished.

8. Under the $SIEBEL_SERVER_HOME/enterprises/<enterprise_name>/<server_name>
directory, there will be a log for the component that is running. This log contains
detailed information on the database activity, including how long the SQL queries
took to execute. Search for high execution times (greater than 0.01 seconds).

9. Once you are done, disable SQL tracing via srvrmgr:

   change evtloglvl ObjMgrSqlLog=0 for comp <component_name>
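Step 8 can be partly automated once the trace is captured. The sketch below is a minimal, hypothetical example: the file name and the "SQL execute time:" line format are invented for illustration, since real ObjMgrSqlLog output differs by release — adapt the awk pattern to your actual component log.

```shell
# Hypothetical sketch of step 8: flag slow SQL in a component log.
# The file name and line format below are invented for illustration;
# adjust the pattern to your actual ObjMgrSqlLog=4 output.
LOG=SCCObjMgr_sample.log

# Build a tiny sample log so the sketch is self-contained.
cat > "$LOG" <<'EOF'
SQL execute time: 0.004
SQL execute time: 0.120
SQL execute time: 0.002
SQL execute time: 0.530
EOF

# Print any statement slower than the 0.01-second threshold from step 8.
awk '/SQL execute time:/ { if ($4 + 0 > 0.01) print "SLOW:", $0 }' "$LOG"
# prints the 0.120 and 0.530 lines
```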

9.15 Changing the Database Connect String

Edit the file named by $ODBCINI (for example, vi $ODBCINI) and update the
ServerName field. Then:

1. srvrmgr /g siebgateway /e siebel /u sadmin /p sadmin /s <server_name>

2. At the srvrmgr prompt, verify and change the value of the DSConnectString
parameter:

   i. list params for named subsystem serverdatasrc
   ii. change param DSConnectString=<connect_string> for named subsystem serverdatasrc
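The $ODBCINI edit can also be scripted. This is a minimal sketch against an invented odbc.ini: the DSN name (siebsrvr_siebel) and file layout are assumptions, and on a real system you would edit the existing file rather than generate one.

```shell
# Hypothetical sketch: repoint the ODBC data source at a new database
# server. The DSN name (siebsrvr_siebel) and file layout are invented.
ODBCINI=./odbc.ini            # normally set in the Siebel environment

cat > "$ODBCINI" <<'EOF'
[siebsrvr_siebel]
ServerName = olddb
PacketSize = 0
EOF

# Rewrite only the ServerName line, leaving the rest of the file alone.
sed 's/^ServerName[[:space:]]*=.*/ServerName = newdb/' "$ODBCINI" > "$ODBCINI.tmp"
mv "$ODBCINI.tmp" "$ODBCINI"

grep ServerName "$ODBCINI"
```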

9.16 Enabling/Disabling Siebel Components

Disabling a Component

1. Bring up the srvrmgr console:

   srvrmgr /g siebgateway /e siebel /u sadmin /p sadmin /s <server_name>

2. Disable the component:

   disable comp <component_name>

3. List the components and verify their status:

   list components

Enabling a Component

Disabling a component may also disable its component definition, which may then
need to be re-enabled:

1. Bring up the srvrmgr console at the enterprise level (that is, do not use the
"/s" switch).

2. srvrmgr /g siebgateway /e siebel /u sadmin /p sadmin

3. Enable the component definition:

   enable compdef <component_definition>

(Note: This enables the component definition at all active servers; you may need to
disable the component at servers where you don't need it.)

4. Bring up the srvrmgr console at the server level:

   srvrmgr /g siebgateway /e siebel /u sadmin /p sadmin /s <server_name>

5. Enable the component definition at the server level:

   enable compdef <component_definition>

6. Bounce the gateway and all active servers.

Note: Sometimes the component may not be enabled even after following these steps. In
such cases you may need to enable the component group at the enterprise level before
enabling the actual component:

   enable compgrp <component_group_name>
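When this enable sequence has to be repeated across environments, the commands can be kept in a single file. The sketch below only generates that command file; whether your srvrmgr accepts it on stdin (as shown in the comment) should be verified for your Siebel release, and the <...> tokens are placeholders to fill in.

```shell
# Collect the section 9.16 enable sequence in one place. The <...>
# tokens are placeholders; piping the file to srvrmgr on stdin is an
# assumption to verify against your Siebel release.
CMDS=enable_component.cmd

cat > "$CMDS" <<'EOF'
enable compgrp <component_group>
enable compdef <component_definition>
list components
EOF

# Example replay (enterprise level, no /s switch):
#   srvrmgr /g siebgateway /e siebel /u sadmin /p sadmin < enable_component.cmd
cat "$CMDS"
```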

10 Appendix A: Transaction Response Times

These are the different transactions executed by the OLTP workload of 10,000 Siebel
users. The figures here are averages across all 10,000 users during a one-hour steady
state, with each user executing multiple iterations and an average think time of
30 seconds between transactions. The mean of the per-transaction averages was
0.167 seconds.
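The overall 0.167-second figure is the mean of the per-transaction averages (the third column). With the table saved as plain text, it can be recomputed in one awk pass; the three sample rows below are copied from the table.

```shell
# Mean of the "Average" column (field 3). The sample file holds three
# rows from the table; run the same awk over the full export to
# reproduce the overall 0.167-second figure.
cat > rt_sample.txt <<'EOF'
eSvc2_Query_Branch 0.094 0.14 3.703
eSvc2_Mail_And_Fax 0.063 0.085 3.156
eSvc2_Login_Page 0.203 0.324 4.578
EOF

awk '{ sum += $3; n++ } END { printf "%.5f\n", sum / n }' rt_sample.txt
# prints 0.18300 for the three sample rows
```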

Transaction Name Min Average Max
eSvc2_Query_Branch 0.094 0.14 3.703
eSvc2_Mail_And_Fax 0.063 0.085 3.156
eSvc2_Login_Page 0.203 0.324 4.578
eSvc2_Email 0.078 0.193 5.563
eSvc2_Contact_Us_3 0.063 0.126 3.859
eSvc2_Contact_Us 0.047 0.064 4.313
eSvc2_Branch_Locator 0.031 0.042 3.422
Eservice2 Avg 0.13914286
eSvc1_Query_Product 0.031 0.052 2.672
eSvc1_Product_Button 0.016 0.033 0.125
eSvc1_OK_Product 0.016 0.025 0.266
eSvc1_My_SR 0.047 0.084 4.031
eSvc1_My_Account 0.063 0.088 3.813
eSvc1_Logout 0.031 0.093 3.609
eSvc1_Login_Page 0.203 0.351 3.984
eSvc1_Login 0.266 0.446 3.75
eSvc1_Enter_Info_And_Submit 0.047 0.098 3.453
eSvc1_Drilldown_Submit_SR 0.063 0.168 4.344
eSvc1_Drilldown_SR 0.078 0.115 2.969
eSvc1_Continue 0.047 0.086 3.359
eSrevice2_Contact_Us_2 0.031 0.06 3.797
eSvc1 avg 0.13069231
eSls3_338LogOut 0.063 0.1 0.563
eSls3_337UserProfile 0.031 0.049 0.156
eSls3_336MyAccount 0.094 0.13 1.438
eSls3_335DDOrder 0.141 0.239 2.922
eSls3_334MyOrders 0.094 0.128 0.203
eSls3_333MyAccount 0.094 0.125 0.859
eSls3_332ConfirmOrder 0.531 0.676 1.625
eSls3_331Continue 0.266 0.428 2.484
eSls3_330EditShippingDeatils 0.078 0.113 2.016
eSls3_329EditShipping 0.078 0.098 0.25
eSls3_328EnterCreditInfo 0.406 0.589 2.875
eSls3_327CheckOut 0.219 0.341 2.578
eSls3_326GotoCart 0.203 0.243 0.625
eSls3_325ChangeQuantAddtocart 0.281 0.453 3.172
eSls3_324DDproduct 0.406 0.503 3.328
eSls3_323BackFromcompare 0 0 0
eSls3_322Pick4ProdCompare 0.344 0.413 3.125
eSls3_321Color_RedSize_MedManuf_Acme 0.531 0.6 0.797

eSls3_320SelectProdFamily 0.109 0.181 2.281
eSls3_319ParamSearch 0.078 0.101 0.328
eSls3_318Home 0.188 0.251 3.203
eSls3_317AddtoCart 0.203 0.283 3.438
eSls3_316DDProdBabyHoodPolice 0.125 0.148 0.219
eSls3_315NextRecs 0.094 0.097 0.172
eSls3_314NextRecs 0.094 0.141 2.828
eSls3_313DDSubCategory1.1 0.172 0.226 3.031
eSls3_312DDCategory1 0.125 0.178 2.609
eSls3_311CatalogTab 0.141 0.17 0.359
eSls3_310AddtoCart 0.281 0.402 2.5
eSls3_309DDProdBabyHood 0.109 0.134 0.266
eSls3_308NextRecs 0.078 0.097 0.188
eSls3_307NextRecs 0.094 0.1 0.516
eSls3_306DDSubCategory1.1 0.156 0.196 2.047
eSls3_305DDCategory1 0.109 0.144 0.375
eSls3_304CatalogTab 0.109 0.146 1.875
eSls3_302Login 0.406 0.595 3.922
eSls3_301ClickLogin 0.047 0.073 0.156
eSls3_300StartApp 0.25 0.373 3.313
eSls3 avg 0.24378947
eSls2_224LogOut 0.047 0.119 2.734
eSls2_223EmptyCart 0.063 0.129 2.203
eSls2_222GotoCart 0.141 0.237 2.734
eSls2_221AddtoCart 0.188 0.3 3.172
eSls2_220DDProdBabyHood 0.109 0.147 1.453
eSls2_219NextRecs 0.094 0.111 2.719
eSls2_218NextRecs 0.094 0.106 2.375
eSls2_217DDSubCategory1.2 0.156 0.189 3.391
eSls2_216DDCategory1 0.328 0.387 2.484
eSls2_215CatalogTab 0.109 0.187 3.547
eSls2_214Addressbook 0.031 0.054 2.031
eSls2_213MyAccount 0.094 0.111 1.344
eSls2_212UserProfile 0.031 0.056 2.563
eSls2_211MyAccount 0.094 0.127 2.922
eSls2_210MyOrders 0.078 0.119 2.719
eSls2_209MyAccount 0.125 0.165 3.172
eSls2_208AddtoCart 0.203 0.427 3.219
eSls2_207DDProdBing 0.125 0.176 3.672
eSls2_206NextRecs 0.078 0.104 3.672
eSls2_205NextRecs 0.078 0.105 3.578
eSls2_204DDSubCategory1.3 0.141 0.155 1.641
eSls2_203DDCategory1 0.109 0.159 3.422
eSls2_202Login 0.375 0.536 3.641
eSls2_201ClickLogin 0.047 0.077 2.125
eSls2_200StartApp 0.234 0.331 4.063
eSls2 avg 0.18456
eSls1_118DDProdBingle 0.125 0.163 3.297
eSls1_117NextRecs 0.078 0.096 2.922
eSls1_116NextRecs 0.078 0.101 3.109

eSls1_115DDSubCategory1.3 0.125 0.157 3.469
eSls1_114DDCategory1 0.109 0.16 3.844
eSls1_113CatalogTab 0.125 0.149 3.266
eSls1_112DDProdBinge 0.125 0.161 3.859
eSls1_111NextRecs 0.078 0.099 3.469
eSls1_110NextRecs 0.078 0.106 3.438
eSls1_109DDSubCategory1.3 0.125 0.155 3.531
eSls1_108DDCategory1 0.109 0.16 3.766
eSls1_107CatalogTab 0.109 0.149 3.047
eSls1_106DDProdBing 0.125 0.159 3.266
eSls1_105NextRecs 0.078 0.106 3.297
eSls1_104NextRecs 0.078 0.103 3.203
eSls1_103DDSubCategory1.3 0.141 0.163 3.5
eSls1_102DDCategory1 0.109 0.164 3.703
eSls1_101CatalogTab 0.109 0.134 3.469
eSls1_100StartApp 0.219 0.348 3.594
eSls1 avg 0.14910526
eChannel3_ResetStates 0 0.006 1.969
eChannel3_GoToSRSolutionView 0.094 0.152 5.766
eChannel3_GoToServiceTab 0.156 0.246 6.328
eChannel3_ExecuteQueryBySRNum 0.078 0.152 4.547
eChannel3_EnterDetailsAndSave 0.219 0.388 6.656
eChannel3_DrilldownSR 0.125 0.218 6.469
eChannel3_CreateNewActivity 0.078 0.124 4.875
eChannel3_ClickQueryButton 0.094 0.166 4.313
eChannel3_ChangeStatusSubStatusAndSave 0.188 0.28 6.188
eChannel3 avg 0.19244444
eChannel2_132_SaveQuote 0.156 0.296 8.422
eChannel2_131_NewQuote 0.125 0.185 4.172
eChannel2_130_OpportunityQuoteView 0.125 0.2 3.813
eChannel2_129_OpportunityAttachmentView 0.109 0.175 4.125
eChannel2_128_SaveActivity 0.125 0.257 4.25
eChannel2_127_NewActivity 0.109 0.15 2.141
eChannel2_126_OpportunityActivityView 0.125 0.192 4.25
eChannel2_125_PickSalesTeam 0.141 0.271 5.828
eChannel2_124_NewSalesTeam 0.047 0.085 3.875
eChannel2_123_OpportunitySalesTeamView 0.109 0.193 3.781
eChannel2_122_SaveProduct2 0.203 0.345 4.641
eChannel2_121_PickProduct2 0.094 0.132 2.203
eChannel2_120_QueryForProduct2 0.031 0.117 2.969
eChannel2_119_ClickOnProductField2 0.016 0.021 0.484
eChannel2_118_NewProduct2 0.094 0.153 4.703
eChannel2_117_SaveProduct1 0.203 0.364 6.141
eChannel2_116_PickProduct1 0.094 0.137 1.688
eChannel2_115_QueryForProduct1 0.031 0.061 2.5
eChannel2_114_ClickOnProductField 0.031 0.037 1.656
eChannel2_113_NewProduct1 0.094 0.139 4.125
eChannel2_112_OpptyProductView 0.109 0.175 3.984
eChannel2_111_SaveContact 0.156 0.293 6.469
eChannel2_110_NewContact 0.156 0.305 5.578

eChannel2_109_NewContactOppty 0.078 0.118 2.719
eChannel2_108_DrilldownOnOppty 0.156 0.267 5.094
eChannel2_107_SaveOppty 0.172 0.341 5.891
eChannel2_106_PickAccount 0.125 0.22 5.516
eChannel2_105_QueryForAccount 0.031 0.071 2.813
eChannel2_104_ClickOnQueryForAccount 0.016 0.02 0.578
eChannel2_103_ClickOnAccountField 0.109 0.189 5.453
eChannel2_102_NewOppty 0.391 0.706 8.313
eChannel2_101_OpportunityScreen 0.156 0.264 5.844
eChannel2_100_ResetStates 0 0.014 0.359
eChannel2 avg 0.19675758
eChannel1_SearchForContact 0.047 0.087 2.953
eChannel1_SearchForAccount 0.047 0.09 4.234
eChannel1_SaveSR 0.094 0.19 4.078
eChannel1_SaveContact 0.109 0.216 4.031
eChannel1_SaveAddress 0.078 0.174 5.234
eChannel1_SaveActivity 0.109 0.217 4.828
eChannel1_SaveAccount 0.141 0.276 4.797
eChannel1_ResetStates 0 0.005 0.469
eChannel1_LookInContact 0.016 0.029 0.703
eChannel1_LookInAccount 0.031 0.042 1.719
eChannel1_GoContacts 0.219 0.326 7.375
eChannel1_GoAccounts 0.156 0.25 6.547
eChannel1_DrilldownOnAccount 0.141 0.303 6.172
eChannel1_CreateSR 0.094 0.153 4.563
eChannel1_CreateContact 0.063 0.118 3.156
eChannel1_CreateAddress 0.063 0.085 2.359
eChannel1_CreateActivity 0.094 0.128 2.766
eChannel1_CreateAccount 0.078 0.115 5.734
eChannel1_CloseFind 0.078 0.114 2.328
eChannel1_ClickNewInContactMVG 0.109 0.236 5.25
eChannel1_ClickFind 0.109 0.168 4.578
eChannel1_AccountTeamView 0.109 0.15 3.109
eChannel1_AccountSRView 0.109 0.169 3.531
eChannel1_AccountRevenueView 0.109 0.169 4.547
eChannel1_AccountQuoteView 0.109 0.186 4.391
eChannel1_AccountOrderView 0.109 0.19 3.953
eChannel1_AccountAssetView 0.094 0.16 4.5
eChannel1_AccountAddressView 0.094 0.152 4.234
eChannel1_AccountActivityView 0.109 0.165 3.375
eChannel1 avg 0.1607931
CC3_2solutionView 0.047 0.092 4.844
CC3_2solutionSRview 0.031 0.058 2.781
CC3_2setStatusAndSaveSR 0.156 0.284 5.938
CC3_2setStatusAndSaveActivity 0.094 0.132 4.844
CC3_2serviceScreen 0.078 0.123 4.172
CC3_2saveSolution 0.063 0.136 4.438
CC3_2ResetStates 0 0.005 0.641
CC3_2relatedSRview 0.063 0.116 5.063
CC3_2newSolutionInMVG 0.047 0.115 5.156

CC3_2newSolution 0.047 0.136 3.281
CC3_2goSolution 0.047 0.096 4.609
CC3_2backToActivitiesView 0.047 0.075 3.688
CC3_2activitiesView 0.078 0.121 4.781
CC3 avg 0.11453846
CC2_2serviceScreen 0.172 0.259 5.391
CC2_2searchSR 0.047 0.088 4.25
CC2_2saveSR 0.172 0.339 6.563
CC2_2saveActivityPlan 0.563 0.931 7.984
CC2_2ResetStates 0 0.004 0.469
CC2_2queryContact 0.047 0.067 3.422
CC2_2productPicklist 0.047 0.134 2.156
CC2_2OpenBinocular 0.031 0.034 1.078
CC2_2okProduct 0.016 0.031 2.859
CC2_2okContact 0.047 0.098 5.328
CC2_2newSR 0.031 0.064 4.5
CC2_2newActivityPlan 0.031 0.054 3.688
CC2_2goProduct 0.063 0.088 3.984
CC2_2goContact 0.063 0.121 5.375
CC2_2contactMVG 0.109 0.226 5.734
CC2_2closeBinocular 0 0.002 0.594
CC2_2activityPlanView 0.047 0.1 4.5
CC2 avg 0.15529412
CC1_ResetStates 0 0.005 0.594
CC1_183_NavigateBackToOpptyQuoteView 0.141 0.213 5.453
CC1_182_SaveSalesOrder 0.063 0.135 4.141
CC1_181_NewSalesOrder 0.359 0.617 6.344
CC1_180_QuoteOrderView 0.25 0.351 6.766
CC1_179_UpdateOppty 0.141 0.258 6.359
CC1_178_Reprice 0.031 0.055 3.234
CC1_177_SelectDiscountAndSaveQuote 0.172 0.288 5.516
CC1_176_PickPriceList 0.031 0.067 3.328
CC1_175_GoQueryForPriceList 0.063 0.106 4.141
CC1_174_QueryForPriceList 0.047 0.076 2.609
CC1_173_BringUpPriceListMVG 0.297 0.423 4.203
CC1_172_DrilldownOnQuote 0.281 0.403 5.797
CC1_171_SaveQuote 0.094 0.208 6.406
CC1_170_AutoQuote 0.141 0.293 6.313
CC1_169_OpptyQuoteView 0.063 0.12 4.391
CC1_168_SaveProduct2 0.063 0.131 3.781
CC1_167_PickProduct2 0.016 0.033 1.016
CC1_166_GoQueryForProduct2 0.063 0.112 4.172
CC1_165_QueryForProduct2 0.031 0.116 1.25
CC1_164_NewProduct2 0.016 0.035 1.281
CC1_163_SaveProduct1 0.063 0.137 3.875
CC1_162_PickProduct1 0.016 0.028 3.406
CC1_161_GoQueryForProduct1 0.063 0.234 5
CC1_160_QueryForProduct1 0.047 0.145 2.266
CC1_159_NewProduct1 0.016 0.041 3.156
CC1_158_OpptyProductView 0.047 0.093 3.969

CC1_157_DrilldownOnOppty 0.172 0.256 4.797
CC1_156_SaveOppty 0.109 0.236 4.656
CC1_155_PickAccount 0.047 0.082 3.797
CC1_154_GoQueryForAccount 0.078 0.137 5.125
CC1_153_QueryForAccount 0.25 0.381 5.25
CC1_152_BringUpAccountMVG 0.109 0.229 2.891
CC1_151_NewInOpptyMVG 0.359 0.599 7.375
CC1_150_NewOppty 0.078 0.209 4.063
CC1_149_ContactOpptyView 0.063 0.115 3.438
CC1_148_SaveContact 0.078 0.176 4.969
CC1_147_NewContact 0.016 0.031 4.328
CC1_146_ContactScreen 0.125 0.204 5.141
CC1_145_CloseFind 0 0.002 0.547
CC1_144_SearchForContact 0.031 0.062 3.75
CC1_143_ClickFind 0.031 0.04 1.328
cc1 avg 0.17814286
total avg 0.16775095

11 Appendix B: Database Objects Growth During the Test

SIEBEL OBJECT NAME TYPE GROWTH IN BYTES
S_DOCK_TXN_LOG TABLE 1,177,673,728
S_EVT_ACT TABLE 190,341,120
S_DOCK_TXN_LOG_P1 INDEX 96,116,736
S_DOCK_TXN_LOG_F1 INDEX 52,600,832
S_ACT_EMP TABLE 46,202,880
S_SRV_REQ TABLE 34,037,760
S_AUDIT_ITEM TABLE 29,818,880
S_OPTY_POSTN TABLE 28,180,480
S_ACT_EMP_M1 INDEX 25,600,000
S_EVT_ACT_M1 INDEX 23,527,424
S_EVT_ACT_M5 INDEX 22,519,808
S_ACT_EMP_U1 INDEX 21,626,880
S_ACT_CONTACT TABLE 21,135,360
S_EVT_ACT_U1 INDEX 18,391,040
S_ACT_EMP_M3 INDEX 16,850,944
S_EVT_ACT_M9 INDEX 16,670,720
S_EVT_ACT_M7 INDEX 16,547,840
S_AUDIT_ITEM_M2 INDEX 16,277,504
S_ACT_EMP_P1 INDEX 15,187,968
S_AUDIT_ITEM_M1 INDEX 14,131,200
S_EVT_ACT_F9 INDEX 13,852,672
S_ACT_CONTACT_U1 INDEX 13,361,152
S_ORDER_ITEM TABLE 13,066,240
S_REVN TABLE 12,943,360
S_CONTACT TABLE 12,779,520
S_SRV_REQ_M7 INDEX 12,492,800
S_SRV_REQ_M2 INDEX 11,960,320
S_ACT_EMP_F1 INDEX 11,804,672
S_SRV_REQ_U2 INDEX 10,731,520
S_SRV_REQ_U1 INDEX 10,444,800
S_DOC_QUOTE TABLE 10,321,920
S_QUOTE_ITEM TABLE 9,666,560
S_SRV_REQ_M9 INDEX 8,970,240
S_ACT_CONTACT_F2 INDEX 8,716,288
S_OPTY TABLE 8,396,800
S_ACT_CONTACT_P1 INDEX 8,183,808
S_AUDIT_ITEM_F2 INDEX 7,987,200
S_ORDER TABLE 7,987,200
S_SRV_REQ_F13 INDEX 7,905,280
S_SRV_REQ_P1 INDEX 7,872,512
S_SRV_REQ_M10 INDEX 7,823,360
S_RESITEM TABLE 7,798,784
S_AUDIT_ITEM_P1 INDEX 7,634,944
S_REVN_U1 INDEX 7,454,720
S_SRV_REQ_M8 INDEX 7,331,840

S_SRV_REQ_M3 INDEX 7,208,960
S_SRV_REQ_F6 INDEX 7,135,232
S_SRV_REQ_F1 INDEX 7,086,080
S_REVN_M1 INDEX 7,004,160
S_OPTY_U1 INDEX 6,676,480
S_REVN_U2 INDEX 6,676,480
S_EVT_ACT_F11 INDEX 6,602,752
S_OPTY_TERR TABLE 6,569,984
S_SRV_REQ_M6 INDEX 6,471,680
S_SRV_REQ_F2 INDEX 5,611,520
S_DOC_ORDER TABLE 4,972,544
S_SR_RESITEM TABLE 4,972,544
S_ORG_EXT TABLE 4,833,280
S_ACCNT_POSTN TABLE 4,341,760
S_OPTY_CON TABLE 4,136,960
S_DOC_QUOTE_BU TABLE 4,096,000
S_PARTY TABLE 4,096,000
S_SRV_REQ_M5 INDEX 4,055,040
S_POSTN_CON TABLE 3,932,160
S_SRV_REQ_F7 INDEX 3,768,320
S_SRV_REQ_M4 INDEX 3,563,520
S_OPTY_BU TABLE 3,194,880
S_REVN_M3 INDEX 3,162,112
S_CONTACT_M13 INDEX 3,153,920
S_OPTY_U2 INDEX 3,072,000
S_REVN_U3 INDEX 3,031,040
S_OPTY_BU_M9 INDEX 2,990,080
S_OPTY_BU_P1 INDEX 2,949,120
S_PARTY_M2 INDEX 2,949,120
S_ORDER_BU_M2 INDEX 2,867,200
S_OPTY_BU_U1 INDEX 2,744,320
S_ORG_EXT_F1 INDEX 2,629,632
S_CONTACT_M11 INDEX 2,621,440
S_CONTACT_M21 INDEX 2,621,440
S_CONTACT_M14 INDEX 2,621,440
S_PARTY_M3 INDEX 2,621,440
S_CONTACT_F6 INDEX 2,580,480
S_OPTY_V2 INDEX 2,580,480
S_RESITEM_M4 INDEX 2,547,712
S_REVN_F6 INDEX 2,539,520
S_ORDER_M5 INDEX 2,498,560
S_PARTY_M4 INDEX 2,498,560
S_REVN_M2 INDEX 2,498,560
S_POSTN_CON_M1 INDEX 2,498,560
S_CONTACT_M12 INDEX 2,416,640
S_OPTY_BU_M1 INDEX 2,416,640
S_CONTACT_M22 INDEX 2,416,640
S_OPTY_BU_M2 INDEX 2,334,720
S_OPTY_BU_M5 INDEX 2,334,720
S_OPTY_BU_M6 INDEX 2,334,720

S_OPTY_BU_M8 INDEX 2,334,720
S_ORDER_POSTN TABLE 2,334,720
S_OPTY_BU_M7 INDEX 2,334,720
S_CONTACT_M9 INDEX 2,293,760
S_CONTACT_X TABLE 2,293,760
S_EVT_ACT_M8 INDEX 2,252,800
S_OPTY_BU_M4 INDEX 2,211,840
S_RESITEM_U2 INDEX 2,138,112
S_RESITEM_M5 INDEX 2,097,152
S_RESITEM_M6 INDEX 2,097,152
S_ORDER_BU TABLE 2,088,960
S_REVN_F3 INDEX 2,088,960
S_DOC_QUOTE_U1 INDEX 2,048,000
S_RESITEM_U1 INDEX 2,023,424
S_OPTY_BU_M3 INDEX 1,966,080
S_REVN_P1 INDEX 1,966,080
S_REVN_F4 INDEX 1,966,080
S_DOC_QUOTE_U2 INDEX 1,925,120
S_SR_RESITEM_U1 INDEX 1,892,352
S_POSTN_CON_M2 INDEX 1,843,200
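A growth table like the one above can be produced by snapshotting segment sizes from Oracle's DBA_SEGMENTS view before and after the run and diffing the two copies. The sketch below only writes the SQL to a file so it can be inspected without a database; seg_before and seg_after are hypothetical snapshot table names.

```shell
# Generate (not run) a segment-growth query. DBA_SEGMENTS is the
# standard Oracle view; seg_before/seg_after are hypothetical tables
# created before and after the test, e.g.:
#   CREATE TABLE seg_before AS
#     SELECT segment_name, segment_type, bytes FROM dba_segments;
cat > seg_growth.sql <<'EOF'
SELECT a.segment_name, a.segment_type,
       a.bytes - b.bytes AS growth_in_bytes
FROM   seg_after a
JOIN   seg_before b ON a.segment_name = b.segment_name
WHERE  a.bytes > b.bytes
ORDER  BY growth_in_bytes DESC;
EOF

# Run on a live instance with: sqlplus system/<password> @seg_growth.sql
cat seg_growth.sql
```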

12 Appendix C: Oracle statspack Report

This is one of the Oracle statspack reports collected during the one-hour steady state
of the test. This report provides an efficient way to identify where the time is spent
inside the Oracle database during the Siebel tests.
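For reference, a statspack report like this is produced by taking a snapshot on either side of the steady state and then running the report script — standard Oracle 9i statspack usage. The commands are written to a file here so the sketch runs without a database; on a live instance they would be fed to sqlplus as the PERFSTAT user.

```shell
# Standard Oracle 9i statspack flow, captured in a script file. Here we
# only write and display it; on a real system run it via sqlplus.
cat > take_snaps.sql <<'EOF'
EXECUTE statspack.snap;
-- ... one-hour steady state runs here ...
EXECUTE statspack.snap;
-- Then report between the two snapshot IDs:
-- @?/rdbms/admin/spreport.sql
EOF

# sqlplus perfstat/<password> @take_snaps.sql
cat take_snaps.sql
```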

STATSPACK report for

DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
ORAMST 3597051609 oramst 1 9.2.0.2.0 NO siebdb

Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 7 14-Aug-04 00:29:10 1,043 57.2 5000-and-server-use
End Snap: 8 14-Aug-04 01:52:31 639 77.8 5000-and-server-use
Elapsed: 83.35 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 2,912M Std Block Size: 8K
Shared Pool Size: 480M Log Buffer: 10,240K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 874,784.17 7,640.12
Logical reads: 95,274.57 832.10
Block changes: 5,344.32 46.68
Physical reads: 1,632.28 14.26
Physical writes: 632.79 5.53
User calls: 3,171.71 27.70
Parses: 1,243.43 10.86
Hard parses: 0.10 0.00
Sorts: 385.85 3.37
Logons: 0.92 0.01
Executes: 1,662.82 14.52
Transactions: 114.50

% Blocks changed per Read: 5.61 Recursive Call %: 22.22
Rollback per transaction %: 0.22 Rows per Sort: 18.68

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 98.64 In-memory Sort %: 100.00
Library Hit %: 100.02 Soft Parse %: 99.99
Execute to Parse %: 25.22 Latch Hit %: 99.95
Parse CPU to Parse Elapsd %: 95.76 % Non-Parse CPU: 97.30

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 67.68 68.34
% SQL with executions>1: 37.08 33.60
% Memory for SQL w/exec>1: 62.79 58.60

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 11,550 72.04
log file sync 573,210 2,143 13.37
direct path read 280,947 627 3.91
direct path write 154,781 533 3.32

log file parallel write 550,101 515 3.21
-------------------------------------------------------------

Wait Events for DB: ORAMST Instance: oramst Snaps: 7 -8
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file sync 573,210 283 2,143 4 1.0
direct path read 280,947 0 627 2 0.5
direct path write 154,781 0 533 3 0.3
log file parallel write 550,101 546,565 515 1 1.0
db file sequential read 6,451,104 0 294 0 11.3
db file parallel write 27,046 0 152 6 0.0
SQL*Net more data to client 2,006,251 0 152 0 3.5
control file parallel write 1,639 0 38 23 0.0
enqueue 8,971 0 10 1 0.0
buffer busy waits 37,083 0 9 0 0.1
latch free 5,753 5,685 6 1 0.0
log file switch completion 51 0 3 54 0.0
db file scattered read 3,064 0 1 0 0.0
LGWR wait for redo copy 13,244 0 1 0 0.0
control file sequential read 743 0 0 0 0.0
SQL*Net break/reset to clien 28 0 0 1 0.0
log file single write 4 0 0 1 0.0
buffer deadlock 211 211 0 0 0.0
log file sequential read 4 0 0 0 0.0
SQL*Net message from client 13,127,091 0 4,983,503 380 22.9
SQL*Net more data from clien 2,197,829 0 285 0 3.8
SQL*Net message to client 13,126,685 0 22 0 22.9
-------------------------------------------------------------

Background Wait Events for DB: ORAMST Instance: oramst Snaps: 7 -8
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file parallel write 550,105 546,569 515 1 1.0
db file parallel write 27,046 0 152 6 0.0
control file parallel write 1,639 0 38 23 0.0
log file sync 825 0 2 2 0.0
LGWR wait for redo copy 13,244 0 1 0 0.0
db file scattered read 176 0 0 2 0.0
SQL*Net more data to client 3,320 0 0 0 0.0
db file sequential read 995 0 0 0 0.0
control file sequential read 691 0 0 0 0.0
enqueue 12 0 0 1 0.0
direct path write 54 0 0 0 0.0
log file single write 4 0 0 1 0.0
buffer busy waits 24 0 0 0 0.0
direct path read 54 0 0 0 0.0
log file sequential read 4 0 0 0 0.0
rdbms ipc message 1,186,256 554,100 28,351 24 2.1
SQL*Net message from client 9,948 0 4,869 489 0.0
smon timer 16 16 4,501 ###### 0.0
SQL*Net more data from clien 3,845 0 1 0 0.0
SQL*Net message to client 9,948 0 0 0 0.0

13 References

Solaris 8 Software Developer Collection: Multithreaded Programming Guide
http://docs.sun.com/app/docs/doc/806-5257?q=multithreading
This book is very useful for understanding the Solaris OS threads implementation.
Developers must read this to take advantage of the various features. The book is also
helpful for performance tuning engineers.

Solaris 8 Reference Manual Collection: mallocctl(3MALLOC) - MT hot memory allocator
http://docs.sun.com/app/docs/doc/806-0627/6j9vhfn1i?q=mtmalloc&a=view
Details on the MTmalloc library that is part of the standard Solaris OS.

Sun Java System (formerly known as iPlanet) Web Server 6.0 Performance Tuning,
Sizing, and Scaling Guide
http://docs.sun.com/source/816-5690-10/perf6.htm

Solaris Tunable Parameters Reference Manual
http://docs.sun.com/app/docs/doc/816-0607?q=kernel+tuning

Siebel PSPP (Platform Sizing and Performance Program) Benchmark Web Site
http://www.siebel.com/crm/performance-benchmark.shtm
This is the official repository of certified benchmark results from all hardware vendors.

Sun Fire E2900-E25K Servers Benchmarks
http://www.sun.com/servers/midrange/sunfire_e2900/benchmarks.html
Industry-leading benchmark results.

Siebel SupportWeb (Siebel product documentation)
http://supportweb.siebel.com/default.asp?lf=support_search.asp&rf=search/seasearch.asp&tf=tsupport_search.asp



Acknowledgements

Thanks to Mark Farrier, Francisco Casas, Farinaz Farsai, Vikram Kumar, Santosh Hasani, Sanjay Agarwal, Harsha Gadagkar, and others from Siebel Systems for working with Sun. Thanks to Scott Anderson, George Drapeau, and Lester Dorman for their role as management and sponsors from Sun. Thanks to Diane Kituara, Sherill Ellis, and George Drapeau for their valuable feedback on content for this paper. Thanks to the Sun MDE team members Kesari Mandyam, Giri Mandalika, and Devika Gollapudi for their contributions. Thanks to the various engineers across Sun Microsystems for providing subject matter expertise: Ravindra Talashikar (PAE), Stephen Johnson (Network Storage), and Dileep Kumar (Sun Java System Web Server).

About the Author

Khader Mohiuddin works as a Staff Engineer in the Market Development Engineering organization at Sun Microsystems, Inc. As the engineering lead for the Sun-Siebel Alliance, he optimizes Siebel CRM applications on the Sun platform and drives joint technology adoption projects. He has been at Sun for four years, helping ISVs adopt Sun's latest technologies. Prior to that he worked at Oracle as a developer and Senior Performance Engineer for five years, and at AT&T Bell Labs, New Jersey, for three years.

About Sun Microsystems, Inc.

Since its inception in 1982, a singular vision, "The Network Is The Computer," has propelled Sun Microsystems, Inc. (Nasdaq: SUNW) to its position as a leading provider of industrial-strength hardware, software, and services that make the Net work. Sun can be found in more than 170 countries and on the World Wide Web at http://sun.com/.

