Sizing Guide Exchange Server 2003 - Fujitsu
Sizing Guide
Exchange Server 2003

Abstract

Sizing of PRIMERGY systems for Microsoft Exchange Server 2003.

This technical documentation is aimed at the persons responsible for sizing PRIMERGY servers for Microsoft Exchange Server 2003. It is intended to help during the presales phase when determining the correct server model for a requested number of users or performance class.

In addition to the question of how the required system performance is determined for a specific number of users, special emphasis is given to discussing the requirements of Exchange Server 2003 and the bottlenecks that can arise from them. The options offered by the various PRIMERGY models and their performance classes are explained with regard to Exchange Server 2003, and sample configurations are presented.
Version 4.2
July 2006
69 pages

Contents

PRIMERGY .............................................................. 2
Exchange Server 2003 .................................................. 3
  What's new in Exchange Server 2003 .................................. 4
  Preview of Exchange 2007 ............................................ 5
Exchange measurement methodology ...................................... 6
  User definition ..................................................... 6
  Load simulation with LoadSim 2003 ................................... 7
  User profiles ....................................................... 8
  Evolution of the user profiles ...................................... 8
  LoadSim 2000 vs. LoadSim 2003 ....................................... 9
  Benchmark versus reality ........................................... 10
  System load ........................................................ 11
Exchange-relevant resources .......................................... 13
  Exchange architecture .............................................. 13
  Active Directory and DNS ........................................... 14
  Operating system ................................................... 15
  Computing performance .............................................. 16
  Main memory ........................................................ 16
  Disk subsystem ..................................................... 18
    Transaction principle ............................................ 19
    Access pattern ................................................... 20
    Caches ........................................................... 20
    RAID levels ...................................................... 21
    Data throughput .................................................. 23
  Hard disks ......................................................... 24
    Storage space .................................................... 26
  Network ............................................................ 27
  High availability .................................................. 27
  Backup ............................................................. 28
    Backup solutions for Exchange Server 2003 ........................ 32
  Archiving .......................................................... 36
  Virus protection ................................................... 37
System analysis tools ................................................ 38
  Performance analysis ............................................... 40
PRIMERGY as Exchange Server 2003 ..................................... 44
  PRIMERGY Econel 100 ................................................ 46
  PRIMERGY Econel 200 ................................................ 48
  PRIMERGY TX150 S4 .................................................. 49
  PRIMERGY TX200 S3 .................................................. 51
  PRIMERGY RX100 S3 .................................................. 53
  PRIMERGY RX200 S3 .................................................. 54
  PRIMERGY RX220 ..................................................... 56
  PRIMERGY RX300 S3 / TX300 S3 ....................................... 58
  PRIMERGY BX600 ..................................................... 60
    PRIMERGY BX620 S3 ................................................ 60
    PRIMERGY BX630 ................................................... 60
  PRIMERGY RX600 S3 / TX600 S3 ....................................... 63
  PRIMERGY RX800 S2 .................................................. 66
  PRIMERGY RXI300 / RXI600 ........................................... 66
  Summary ............................................................ 67
References ........................................................... 68
Document History ..................................................... 69
Contacts ............................................................. 69
White Paper Sizing Guide Exchange Server 2003 Version: 4.2, July 2006
PRIMERGY

The following definition is aimed at all readers for whom the name PRIMERGY has no meaning and serves as a short introduction: since 1995, PRIMERGY has been the trade name of a very successful server family from Fujitsu. The product line is developed and produced by Fujitsu and ranges from systems for small work groups to solutions for large-scale companies.

Scalability, Flexibility & Expandability

The latest technologies are used throughout the PRIMERGY family, from small uniprocessor systems through to systems with 16 processors. Intel or AMD processors of the highest performance class form the heart of these systems. Multiple 64-bit PCI I/O and memory buses, fast RAM and high-performance components, such as SCSI technology and Fibre Channel products, ensure high data throughput. This means full performance, regardless of whether the approach is scaling out or scaling up. For the scale-out method (similar to an ant colony, where enhanced performance is provided by a multitude of individuals), the blade servers and compact compute-node systems are ideally suited. The scale-up method, i.e. the upgrading of an existing system, is supported by the extensive upgrade options of the PRIMERGY systems: up to 16 processors and up to 256 GB of main memory. PCI and PCI-X slots provide the required expansion options for I/O components. Long-term planning in close cooperation with renowned component suppliers such as Intel, AMD, LSI and ServerWorks ensures continuous and optimal compatibility from one server generation to the next. PRIMERGY planning extends two years into the future and guarantees the earliest possible integration of the latest technologies.
Reliability & Availability

In addition to performance, emphasis is also placed on quality. This includes not only excellent workmanship and the use of high-quality components, but also fail-safety, early error diagnostics and data protection features. Important system components are designed redundantly and their functionality is monitored by the system. Many parts can be replaced during operation without disruption, thus keeping downtimes to a minimum and guaranteeing availability.
Security

Your data is of the greatest importance to PRIMERGY. Protection against data loss is provided by the high-performance disk subsystems of the PRIMERGY and FibreCAT product families. Even higher availability is provided by PRIMERGY cluster configurations, in which not only the servers but also the disk subsystems and the entire cabling are redundant in design.
Manageability

Comprehensive management software for all phases of the server lifecycle ensures smooth operation and simplifies maintenance and error diagnostics of the PRIMERGY.

ServerStart is a user-friendly, menu-based software package for the optimum installation and configuration of the system, with automatic hardware detection and installation of all required drivers.

ServerView is used for server monitoring and provides alarm, threshold, report and basic management, pre-failure detection and analysis, an alarm service and version management.

RemoteView permits independent remote maintenance and diagnostics of the hardware and operating system through LAN or modem.

Further detailed information about the PRIMERGY systems is available on the Internet at http://www.primergy.com.
© Fujitsu Technology Solutions, 2009 Page 2 (69)
Exchange Server 2003

This chapter, with its brief summary, is aimed at readers who have not yet gained any experience with the Microsoft Exchange Server 2003 product. Only the most important functions can be mentioned here; explaining all the features of Microsoft Exchange Server would exceed the scope of this white paper and its actual subject matter.

Microsoft Exchange Server 2003 is a modern client-server solution for messaging and workgroup computing. Exchange enables secure access to mailboxes, mail storage and address books. In addition to the transmission of electronic mail, the platform provides convenient appointment and calendar functions within an organization or work group, publication of information in public folders and web storage, electronic forms, as well as user-defined applications for workflow automation.

Microsoft Exchange Server 2003 is completely integrated into the Windows Active Directory and supports a hierarchical topology. This permits a multitude of Exchange servers, grouped according to location, to be operated jointly on a worldwide basis within an organization. Administration can be performed centrally and across locations. This decentralized concept increases the performance and availability of Microsoft Exchange as a corporate messaging system and enables outstanding scalability.

On the one hand, Exchange guarantees data security through its complete integration into the Windows security mechanisms; on the other hand, it provides additional mechanisms such as digital signatures and e-mail encryption and decryption.

The high degree of reliability already offered by a single Exchange server is greatly enhanced by support for the Microsoft Cluster Service, which is included in Windows Server 2003 Enterprise Edition and Windows Server 2003 Datacenter Edition. This enables the realization of clusters with two to eight nodes.

By means of so-called connectors, Microsoft Exchange servers can be linked to worldwide e-mail services such as the Internet and X.400. Similarly, interoperability with other mail systems such as Lotus Notes, PROFS and SNADS is possible. Furthermore, third-party suppliers now offer numerous gateways that integrate further services into the Exchange server, such as fax, telephone connections for call-center solutions, voicemail, etc.

Microsoft Exchange Server 2003 supports numerous standard communication protocols, such as Post Office Protocol version 3 (POP3), Simple Mail Transfer Protocol (SMTP), Lightweight Directory Access Protocol (LDAP), Internet Message Access Protocol version 4 (IMAP4), Network News Transfer Protocol (NNTP) and Hypertext Transfer Protocol (HTTP), with which Exchange can be integrated into heterogeneous network and client environments. This guarantees location-independent access to the information administered by Exchange Server 2003, regardless of whether the device is a desktop PC (irrespective of the operating system), a personal digital assistant (PDA) or a mobile telephone.
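Access through these standard Internet protocols can be illustrated with a short sketch. The following Python fragment is a minimal illustration, not part of the product documentation; the host name and the addresses are placeholders. It composes a standards-conformant MIME message with the standard library and shows how it would be submitted to an SMTP service such as the one Exchange exposes:

```python
from email.message import EmailMessage
import smtplib

def build_message(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Compose an RFC 2822/MIME message that any standards-based server accepts."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_smtp(msg: EmailMessage, host: str, port: int = 25) -> None:
    """Submit the message over SMTP, one of the protocols listed above."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(msg)

msg = build_message("user@example.com", "colleague@example.com",
                    "Status report", "The sizing tests are finished.")
# send_via_smtp(msg, "exchange.example.com")  # host name is a placeholder
```

Because the message is built with the generic e-mail APIs, the same client code works unchanged against Exchange or any other SMTP-capable mail system, which is precisely the point of supporting the standard protocols.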
Microsoft Exchange Server 2003 is available in two configurations:

Exchange Server 2003 Standard Edition
  Platform for small and medium-sized companies
  Max. 2 databases
  Max. 16 GB per database (with Service Pack 2 up to 75 GB)

Exchange Server 2003 Enterprise Edition
  Platform for medium to very large, globally active companies with the highest requirements concerning reliability and scalability
  Max. 20 databases
  Max. 16 TB per database
  Cluster support
  X.400 Connector

The Exchange Server 2003 Standard Edition is also available as part of the bundle »Windows Small Business Server 2003«. Windows Small Business Server (SBS) is designed as a complete package to meet the requirements of small and medium enterprises with up to 75 client workplaces.

Additional information concerning the functionality of Microsoft Exchange Server 2003 is available on the Internet at www.microsoft.com/exchange and www.microsoft.com/windowsserver2003/sbs.
What's new in Exchange Server 2003

Microsoft Exchange Server's ten-year history goes back to 1996. The first version of Exchange bore the version number 4.0 because it replaced the predecessor product, MS Mail 3.2. However, in terms of architecture Exchange has nothing in common with MS Mail, except that both applications are basically used to exchange e-mails. Today, ten years after the market launch, Exchange bears the product name Exchange Server 2003 and the internal version number 6.5.

Exchange Server 2003, released three years after its predecessor, is the successor to Exchange 2000 Server. Based on the product name and the period of time between the two product versions, it could be assumed that Exchange Server 2003 has revolutionary changes to offer; but as can be seen from the internal version code, it is a so-called point release. In comparison with the changes between Exchange 5.5 and Exchange 2000 Server, the changes are less revolutionary and more of an evolutionary nature.

Compared with its predecessor Exchange 5.5, Exchange 2000 Server involved a completely new concept for database organization and user data administration, with consequences up to and including the domain structure of the Windows network, so that a true migration from Exchange 5.5 to Exchange 2000 Server was necessary. As a consequence, many Exchange users shied away from these migration costs and are migrating directly from Exchange 5.5 to Exchange Server 2003. The migration from Exchange 2000 Server to Exchange Server 2003, on the other hand, is proving to be considerably less problematic and is the equivalent of a simple update.

Nevertheless, the new functions in Exchange Server 2003 compared with Exchange 2000 Server are not to be underestimated. Service Pack 1 (SP1) and Service Pack 2 (SP2) for Exchange Server 2003 not only fix problems, they also offer many new and improved features, such as mobile e-mail access with direct push technology and improved spam filtering with the »Intelligent Message Filter« (IMF). The white paper What's new in Exchange Server 2003 [L7] from Microsoft describes the new features of Exchange Server 2003, including SP2, on more than 200 pages. The bulk of the new functions concern security, manageability, mobility and client-side extensions, as provided by the new standard client for Exchange, Outlook 2003, and the revised OWA (Outlook Web Access). In the following we concentrate on a number of outstanding innovations that have an impact on the hardware basis of an Exchange server.
Shorter backup and restore times

The Volume Shadow Copy Service (VSS for short) is in fact a new function of Windows Server 2003 that enables the creation of snapshot backups. Exchange Server 2003 is compatible with this new VSS function, making it possible to back up the Exchange databases in a very short time and thus drastically reduce backup and restore times, which turn out to be the limiting factor in large Exchange servers. This functionality is addressed in more detail in the chapters Consolidation and Backup.

Exchange Server 2003 also offers for the first time a simple method of restoring individual mailboxes. To this end, there is a separate Recovery Storage Group, which permits the restoration of individual mailboxes or individual databases during operation.
Extended clustering

In contrast to Exchange 2000 Server, which supported a cluster with two nodes on the basis of Windows 2000 Advanced Server and four nodes under Windows 2000 Datacenter Server, Exchange Server 2003 permits a cluster with up to eight nodes to be implemented as early as on Windows Server 2003 Enterprise Edition.

Version history:

Exchange 4.0       v 4.0   Apr. 1996
Exchange 5.0       v 5.0   Mar. 1997
Exchange 5.5       v 5.5   Nov. 1997
Exchange 2000      v 6.0   Oct. 2000
Exchange 2003      v 6.5   Oct. 2003
Exchange 2003 SP1  v 6.5   May 2004
Exchange 2003 SP2  v 6.5   Oct. 2005

Number of cluster nodes:

                                  Exchange 2000        Exchange 2003
                                  Enterprise Server    Enterprise Edition
Windows 2000 Server               -                    -
Windows 2000 Advanced Server      2                    2
Windows 2000 DataCenter Server    4                    4
Windows 2003 Standard Edition     -                    -
Windows 2003 Enterprise Edition   -                    8
Windows 2003 Datacenter Edition   -                    8
Reduced network load

With the new functionality of client-side caching, Outlook 2003, the current version of the standard Exchange client, makes an important contribution toward reducing network load. Particularly with clients connected through a low-capacity WAN, this greatly relieves the load on the network and also on the server. Whereas all previous Outlook versions submitted a request to the Exchange server for every object, the Exchange server is now only contacted in this way on first access. In addition, data compression is used for communication between Outlook 2003 and Exchange Server 2003. Communication among the Exchange servers themselves has also been optimized. For example, the replication of public folders is now always based on a least-cost calculation, and the accesses of Exchange Server 2003 to the Active Directory are reduced by up to 60% through better caching algorithms.

It should be noted that many of the new functions of Exchange Server 2003 are only available if Exchange Server 2003 is used in conjunction with Windows Server 2003 and Outlook 2003. For more details see the white paper What's new in Exchange Server 2003 [L7].
Preview of Exchange 2007

After looking back on the history of Exchange Server 2003, we would now also like to take a brief look at the future of Exchange Server. The next version of Exchange will be called »Exchange Server 2007« and, as the name suggests, will appear in 2007.

The outstanding performance-relevant change will be the changeover from a 32-bit to a 64-bit version. Exchange Server 2003 is a 32-bit application and runs solely on the 32-bit version of Windows. Since Exchange Server 2003 does not actively use PAE, Exchange is limited to an address space of 3 GB; configuring an Exchange server with more than 4 GB of main memory does not yield a gain in performance. On the other hand, Exchange »lives« from the cache for the database (see chapter Main memory). This limitation will be overcome with Exchange Server 2007, which will only be available as a 64-bit version for x64 architectures (Intel CPUs with EM64T and AMD Opteron); a version for the IA64 architecture (Itanium) is not planned.
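For context, the 3 GB user-mode address space on 32-bit Windows Server 2003 is enabled with the documented /3GB switch in boot.ini; /USERVA fine-tunes the split, and 3030 is the value commonly recommended for Exchange servers. The following is only a sketch of such an entry; the ARC device path is an example and varies per installation:

```ini
[operating systems]
; Give user processes (and thus the Exchange store) a 3 GB address space;
; /USERVA=3030 reserves a little kernel space for page table entries.
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB /USERVA=3030
```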
As the following table illustrates, the usage of physical memory is considerably improved with 64 bit.

                                                   Physical memory   Exchange address space
Windows Server 2003 R2 Standard Edition            4 GB              3 GB
Windows Server 2003 R2 Enterprise Edition          64 GB             3 GB
Windows Server 2003 R2 Datacenter Edition          128 GB            3 GB
Windows Server 2003 R2 x64 Standard Edition        32 GB             8 TB
Windows Server 2003 R2 x64 Enterprise Edition      1 TB              8 TB
Windows Server 2003 R2 x64 Datacenter Edition      1 TB              8 TB

With sufficient memory, Exchange Server 2007 will benefit particularly from a larger database cache. This reduces the number of disk accesses and thus the requirements made of the disk subsystem, which in today's Exchange configurations frequently determines the performance of the entire system. As a result, with Exchange Server 2007 it is possible either to organize the disk subsystem more cost-effectively for an unchanged number of users or to manage a larger number of users.
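The effect of a larger database cache on the disk subsystem can be sketched with a back-of-the-envelope calculation. The figures used below (0.75 IOPS per user, a cache that absorbs 70% of the accesses) are purely illustrative assumptions, not measured Exchange values:

```python
def required_disk_iops(users: int, iops_per_user: float,
                       cache_hit_gain: float = 0.0) -> float:
    """Estimate the random I/O load the mailbox disks must sustain.

    A larger database cache raises the hit rate, so fewer accesses
    reach the disks; cache_hit_gain is the absorbed fraction (0..1).
    """
    return users * iops_per_user * (1.0 - cache_hit_gain)

# 32-bit server, hypothetical 0.75 IOPS per user:
print(required_disk_iops(2000, 0.75))  # 1500.0
# Same users, assuming a larger 64-bit cache absorbs 70% of the accesses:
print(required_disk_iops(2000, 0.75, cache_hit_gain=0.7))
```

Under these assumed numbers the disk subsystem would only need to deliver about a third of the IOPS, which is exactly the trade-off described above: fewer or cheaper spindles for the same users, or more users on the same spindles.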
In addition, Exchange Server 2007 brings a number of new or extended functionalities whose listing would go beyond the scope of this paper. More details about Exchange Server 2007 can be found on the Microsoft web page http://www.microsoft.com/exchange/preview/default.mspx.

Together with Exchange Server 2007, a revised version of Outlook will also appear on the client side. The web page http://www.microsoft.com/office/preview/programs/outlook/overview.mspx gives an overview of Microsoft Office Outlook 2007.
Exchange measurement methodology

The following questions are repeatedly asked concerning the sizing of a server: »Which PRIMERGY do I require for N Exchange users?« or »How many Exchange users can a specific PRIMERGY model serve?«

These questions are asked in particular by customers and sales staff, who are of course looking for the best possible system: not under-sized, so that the performance is right, but also not over-sized, so that the price is right.

In addition, service engineers ask: »How can I configure the system so that the best possible performance is achieved from the existing hardware?« For example, it is decisive how the hard disks are organized in RAID arrays.

Unfortunately, the answers cannot be listed in a concise table with the number of users in one column and the ideal PRIMERGY system or its configuration in the other, even if many would like to see this and several competitors even suggest it. Why the answers to this seemingly simple question are not so trivial, and how a suitable PRIMERGY system can nevertheless be chosen on the basis of the number of users, is explained below.
User definition

The most difficult point in the apparently simple question »Which PRIMERGY do I require for N users?« is the user himself. One possible answer to the question »What is a user?« could be: a person who uses Exchange, i.e. who sends e-mails. Is that all? No! The user of course also reads e-mails... and uses the filing options provided by Exchange... and manages addresses... and uses the calendar. And just how intensively does the user perform each of these tasks in the course of the daily routine?

In addition to the question of what user behavior is like with regard to the number and size of mails, a question increasingly asked now concerns the method of access: »How does the user gain access to the mail system?« A few years ago it was normal to find largely homogeneous infrastructures, at least within an organizational unit, where practically all employees worked uniformly with Outlook. Due to the increasing mobility of end users and the growing diversity of mail-capable devices, the multitude of protocols and access mechanisms is increasing, e.g. Outlook (MAPI), Internet protocols (POP3, IMAP4, LDAP), web-based access (HTTP) or mobile devices (WAP), to list just the most important ones.

One could now assume that this has no direct influence on the number of users an Exchange server can handle, because at any given time a user uses only one protocol or another, and in the end is only one user. This is however not the case, because the mail protocols differ in the method of communication used. For example, one type processes mails as a whole and the other on an object-oriented basis (sender, recipient, subject, ...).

These different access patterns mean that mail must be converted by the Exchange server into the required format, as the information is only stored in one format in the information store of the Exchange server. The load caused by this conversion is in part not insignificant. With Exchange 2000 Server and Exchange Server 2003, in contrast to Exchange 5.5, Microsoft has already taken several measures to reduce the load caused by the conversion of mail formats and protocols. For example, a special streaming cache is set up for Internet-based access, which relieves the load on the database that is optimized for MAPI-based accesses.
The greatest problem remains the human factor: there is no such thing as »the« user. Just as there are small, big, short and thin people, there are users who use the medium of electronic mail more or less intensively. This depends not least on the respective task of the user. Whereas one user may send only a few short text mails per day, another may send mails with attachments of several MB. One user may read a mail and delete it straight away, whereas another will collect his mails in archives, which naturally results in a completely different load being placed on the Exchange server.
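The consequences of such different mailbox habits for storage sizing can be estimated with simple arithmetic. The sketch below uses the per-database limits from the edition overview earlier in this paper (16 GB per database for Standard Edition, 75 GB with SP2, 16 TB for Enterprise Edition); the user counts and mailbox quotas are hypothetical:

```python
import math

def databases_needed(users: int, mailbox_quota_gb: float, db_limit_gb: float) -> int:
    """How many databases a mailbox population needs, given a per-database limit."""
    total_gb = users * mailbox_quota_gb
    return math.ceil(total_gb / db_limit_gb)

# 500 light users with a 0.1 GB (100 MB) quota, Standard Edition with SP2:
print(databases_needed(500, 0.1, 75))  # 1 -> fits within the 2-database limit
# 500 archive-happy users with a 0.5 GB quota:
print(databases_needed(500, 0.5, 75))  # 4 -> exceeds Standard Edition's 2 databases
```

The same number of users thus fits comfortably into Standard Edition with one profile but forces Enterprise Edition (or smaller quotas) with another, which is why the user profile must be pinned down before a server can be sized.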
We have now established that one user is not the same as another. But even if we know the user profile, the question still arises: how many users of a specific performance class can a server system serve? Due to the manifold influences, an exact answer can only be provided by a test performed in a real environment, which is of course impossible in practice. However, very good performance statements can be obtained through simulations.
Load simulation with LoadSim <strong>2003</strong><br />
How do we determine the number of users who can be served by a server? By trial and error. This of course cannot be done with real users; instead, the users are simulated with the aid of computers, so-called load generators, and special software. The load simulator used for the Microsoft Exchange server is LoadSim.
The Microsoft <strong>Exchange</strong> server load simulator LoadSim<br />
<strong>2003</strong> (internal version number 6.5) is a tool which enables<br />
numerous mail users to be simulated. LoadSim can be<br />
used to determine the efficiency of the Microsoft<br />
<strong>Exchange</strong> server under different loads. The behavior of<br />
the server can be studied with the aid of load profiles<br />
which are freely definable. During this process, the load<br />
simulator LoadSim will determine a so-called score. This<br />
is the mean response time a user has to wait for his job to<br />
be acknowledged. In this case, a response time of 0.2<br />
seconds is regarded as an excellent value because this is<br />
equivalent to the natural reaction time of a person. A value<br />
of below a second is generally viewed as acceptable. The<br />
results can then be used to determine the optimum<br />
number of users per server, to analyze performance<br />
bottlenecks and to evaluate the efficiency of a specific hardware configuration.
[Figure: measurement setup - load generators and a controller connected via the network to the Exchange server to be measured and to an Active Directory server.]
To obtain your own performance analysis, use the Exchange load simulator LoadSim 2003, available from the Microsoft web page Downloads for Exchange Server 2003 [L9].
However, load simulation also has its problems: it is only as good as the load profile that has been defined. Only when the load profile coincides with reality, or at least comes very close to it, will the results of the load simulation correlate with the load in real operation. If the customer knows his load profile, the performance of an Exchange solution can be evaluated very exactly in a customer-specific simulation. Unfortunately, the load profile is known exactly only in the rarest cases. Although this method provides precise performance information for a specific customer, general performance statements cannot be inferred from it.
In order to perform performance measurements that are valid on a general basis, several things must be standardized: on the one hand, a standard user profile must be defined, and on the other hand the Exchange environment must be idealized. Both assumptions must cover as large a bandwidth of real scenarios as possible.
With the aid of load simulation it is now possible to determine the influence of specific key components, such<br />
as CPU performance, main memory and disk system on the performance of the overall system. A set of<br />
basic rules can then be derived from this which must be observed when sizing an <strong>Exchange</strong> server.<br />
Additionally, using a standard load profile, the various models of the PRIMERGY system family can then be<br />
measured according to a uniform method in order for them to be graded into specific performance classes.<br />
User profiles<br />
The simulation tool LoadSim 2003 for Exchange Server 2003 allows user profiles of any type to be created. In addition, LoadSim provides several predefined standard profiles. According to Microsoft, these profiles were derived from analyses of existing Exchange installations. As it was obviously difficult to determine a single standard user, Microsoft has defined three profiles - medium, heavy and cached-mode user - as well as an additional fourth profile, MMB3, for pure benchmark purposes.
All four predefined load profiles of LoadSim 2003 use the same mix of mails with an average mail size of 76.8 kB. The profiles differ in the number of mails in the mailbox and in the number of mails sent per day, as the following table shows:

Activity                       Medium   Heavy   Cached Mode   MMB3
New mails per work day             10      12             7      8
Reply, Reply All, Forward          20      40            37     56
Average recipients per mail       4.8     4.0           3.7    2.4
Received mails                    141     208           162    152
Mail traffic in MB per day         13      20            15     16
Mailbox size in MB                 60     112            93    100

All other considerations and sizing measurements in this white paper are based on the medium profile, which should reflect most application scenarios. The heavy profile, with far more than 200 mail activities per user and day, hardly reflects an average user's real activities. The cached-mode profile was developed especially for simulating the new cached mode of Outlook 2003 and Exchange Server 2003. Unfortunately, the mail traffic it generates is not comparable with that of any other standard profile of LoadSim 2003, so it cannot be used for a comparison between cached mode and classic Outlook. The MMB3 profile is solely suited for benchmark purposes, as illustrated in the chapter Benchmark versus Reality.
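The medium profile's daily mail traffic of about 13 MB can be roughly reproduced from its activity figures. A minimal sketch; the assumption that received plus sent mails at the 76.8 kB average size make up the volume is ours, not stated in the source:

```python
# Approximate the medium profile's daily mail volume from its
# activity figures (received mails plus new mails and replies/forwards,
# each weighted with the 76.8 kB average mail size).
AVG_MAIL_KB = 76.8            # average mail size of the LoadSim 2003 mix

received_per_day = 141        # received mails per day (medium profile)
sent_per_day = 10 + 20        # new mails plus replies/forwards per day

volume_mb = (received_per_day + sent_per_day) * AVG_MAIL_KB / 1024
print(f"{volume_mb:.2f} MB per day")  # ~12.8 MB, i.e. the ~13 MB in the table
```

The result of roughly 12.8 MB per day agrees with both the profile table (13 MB) and the LoadSim 2003 column of the activity comparison later in this chapter (12.82 MB).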
Evolution of the user profiles<br />
A load simulation tool for MAPI-based accesses, LoadSim, has been available since the first version of Exchange. Over this period of ten years the load profiles had to be adapted to user behavior, which changed over the years, and to the growing functionality of Exchange and Outlook. The load profiles have been redefined approximately every three years, so that LoadSim 2003 now represents the third generation of load profiles for Exchange.
Mail Size   Attachment                Weighting in LoadSim
                                       5.5     2000    2003
4 kB        -                          60%     41%     15%
5 kB        -                          13%     18%     18%
6 kB        -                           5%     14%     16%
10 kB       Excel Object                5%     -       -
14 kB       Bitmap Object               2%     10%     5%
14 kB       Text File                   5%     -       -
18 kB       Excel Spreadsheet           2%     7%      17%
19 kB       Word Document               8%     7%      20%
107 kB      PowerPoint Presentation     -      1%      5%
1 MB        PowerPoint Presentation     -      1%      2%
2 MB        Word Document               -      1%      2%
Average mail size [kB]                 5.7     39.2    76.8
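The average mail size in the last table row follows directly from the weights; a short check, assuming 1 MB = 1024 kB (the stated 76.8 kB average only works out with this conversion):

```python
# Recompute the LoadSim 2003 average mail size from the weighting
# column of the table above. Sizes in kB, weights as fractions.
mix_2003 = [
    (4, 0.15), (5, 0.18), (6, 0.16),   # mails without attachment
    (14, 0.05),                        # bitmap object
    (18, 0.17), (19, 0.20),            # Excel spreadsheet / Word document
    (107, 0.05),                       # PowerPoint presentation
    (1024, 0.02), (2048, 0.02),        # 1 MB / 2 MB attachments
]
assert abs(sum(w for _, w in mix_2003) - 1.0) < 1e-9  # weights sum to 100%

average_kb = sum(size * weight for size, weight in mix_2003)
print(f"average mail size: {average_kb:.2f} kB")  # 76.81 kB, as in the table
```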
The table above shows how mail volume has shifted over the years toward larger mails with attachments. Between 1997 and 2000 the average mail size increased roughly sevenfold, from 5.7 kB to 39.2 kB. Fortunately, this trend has not persisted: during the last three years the average mail size has only doubled. However, this is also due to the fact that many mail-server operators restrict the permissible size of e-mails.
Exchange Version   Year of Publication   LoadSim Version
Exchange 4.0       1996                  LoadSim 4.x
Exchange 5.0       1997                  LoadSim 4.x
Exchange 5.5       1997                  LoadSim 4.x
Exchange 2000      2000                  LoadSim 2000
Exchange 2003      2003                  LoadSim 2003
Not only has the average mail size grown over the years, so has the number of mails sent. Consequently, the average mailbox size is also growing. Since most mailbox operators also intervene here in a restrictive capacity and set mailbox limits of typically 100 MB, the average mailbox size is at present approximately 60 MB.
In addition to the growing mail volume, LoadSim also does justice to changing user behavior by including various user actions in the load profile. In this way, the simulation of accesses to the calendar and to contacts was added in LoadSim 2000; the simulation of smart folders and rules was included in the load profile of LoadSim 2003.
LoadSim 2000 vs. LoadSim <strong>2003</strong><br />
In order to do justice to present user behavior, we use the current version LoadSim 2003 with the medium user profile for all sizing measurements in this paper.
Although comparability with the previous version 3.x of this sizing guide suffers as a result, it would not be justified to make statements on the sizing of an Exchange server based on an outdated load profile. However, in order to obtain an impression of the load differences that a different load profile produces on an Exchange server, an identical system configuration was measured both with the LoadSim 2000 load profile, on which Sizing Guide 3.x was based, and with the current LoadSim 2003 profile, on which this Sizing Guide 4.x is based. The diagram opposite shows the percentage changes of significant performance data, with the measurement results of LoadSim 2000 taken as the 100% reference basis.
Activity                      LoadSim 5.5   LoadSim 2000   LoadSim 2003
New mails per work day                  4              7             10
Mails with attachment                 22%            27%            51%
Average recipients per mail          4.68           3.67           4.78
Average mail size [kB]               5.72          39.16          76.81
Daily mail volume [MB]               0.45           7.88          12.82
Mailbox size [MB]                       5             26             60
Activity               LoadSim 5.5   LoadSim 2000   LoadSim 2003
Send Mail                        x              x              x
Process Inbox                    x              x              x
Browse Mail                      x              x              x
Free/Busy                        x              x              x
Request Meetings                                x              x
Make Appointments                               x              x
Browse Calendar                                 x              x
Journal Applications             x
Logon/off                                                      x
Smart Folders                                                  x
Rules                                                          x
Browse Contacts                                                x
Create Contact                                                 x
Benchmark versus reality<br />
If this type of load simulation is performed under standardized conditions, it is called a benchmark. The focus of a benchmark is usually to determine the largest possible number of »users per server«. The advantage of such standardized conditions is that they make comparisons possible across systems and manufacturers. The disadvantage is that each manufacturer attempts to get the optimum out of its system in order to do well in cross-manufacturer comparisons. As a result, all functions that a system normally requires but that are not mandated by the benchmark rules are disregarded or even consciously deactivated. Functions such as backup, virus protection and other server tasks, e.g. the classic file and print services, as well as growth reserves, are then typically disregarded completely. Even the functions that provide a system's fail-safety, e.g. data protection through RAID 1 or RAID 5, remain disregarded.
This increasingly results in confusion when the efficiency of an Exchange server is advertised using benchmarks. Therefore, Fujitsu, in contrast to many a competitor, has always consciously distinguished between benchmark and sizing measurements.
Thus, with the MAPI Messaging Benchmark MMB2 under Exchange 2000 Server, more than 16,000 users were achieved with one server. In reality, however, such a high number of users cannot be achieved; on the contrary, realistic user numbers fall short of this mark by a factor of about four. With the successor, MMB3 for Exchange Server 2003, Microsoft attempted to develop a load profile that ascertains lower, more realistic user numbers. But with current server hardware and a disk subsystem that is oversized compared with a real environment, it is possible to achieve about 13,500 MMB3 users. This number is likewise about three times higher than the number of users a server can host in real operation.
A cross-manufacturer collection of results of the MAPI Messaging Benchmark (MMB) is maintained by<br />
Microsoft in a list with MMB3 Results [L8].<br />
Nevertheless, benchmarks are an important aid for determining the operating efficiency of computer systems, provided that the benchmark results are interpreted correctly. Above all, a benchmark must not be confused with a performance measurement or with a real application. Hence the most important differences at a glance:
Benchmark: Optimized for maximum performance, for the purpose of cross-manufacturer comparison.
Performance measurement: The measurement of several systems which are not trimmed to maximum performance, but are configured in a reality-compatible, simplified scenario for the purpose of comparison with each other.
Real application: Real scenarios with several services on one server, with peak loads and exceptional situations that have to be overcome.
System load<br />
When is an Exchange server working at full capacity? The Windows Server 2003 performance monitor provides numerous counters for a very detailed analysis of the system, and Exchange Server adds further counters of its own.
The most important counters, which are used to read off the behavior of the system, are:<br />
• The counter »Processor / % Processor Time« describes the average CPU load. The MMB3<br />
benchmark rules specify that the value may not be greater than 90%. For a productive system, however,<br />
this is clearly too high. Depending on the source, there are recommendations saying that the average<br />
CPU load should not be permanently above 70% - 80%. In all our simulations to size the PRIMERGY as<br />
an <strong>Exchange</strong> server we set ourselves a limit of 30% so that there still remains sufficient CPU<br />
performance - in addition to <strong>Exchange</strong> - for other tasks such as the virus check, backup, etc.<br />
• The counter »System / Processor Queue Length« indicates how many threads are waiting to be<br />
processed by the CPU. This counter should not be larger than the number of logical processors for a<br />
longer period of time.<br />
• The counter »Logical Disk / Average Disk Queue Length« provides information about the disk subsystem. Over a lengthy measuring period, this counter should not be larger than the number of physical disks of which the logical drive is composed.
• The <strong>Exchange</strong>-specific counter »MS<strong>Exchange</strong>IS Mailbox / Send Queue Size« counts the <strong>Exchange</strong><br />
objects that are waiting to be forwarded. The destination can either be a local database or another mail<br />
server. The send queue should always be below 500, not grow continuously over a lengthy period of<br />
time and now and again reach a value close to zero.<br />
• During the simulation run the simulation tool LoadSim determines the processing time of all transactions<br />
in milliseconds (ms) and calculates from this a so-called 95% score. This is the maximum time that 95%<br />
of all transactions have required. In other words, there are transactions that took longer, but 95% of the<br />
transactions implemented need less time than that specified by the score.<br />
• The MMB3 rules & regulations stipulate that the score should be < 1000 ms. We consider a response<br />
time of 1s to be unacceptable for a productive system. For example, when scrolling through the mail<br />
inbox this would mean having to wait a second for every new entry. Therefore, for the measurements<br />
which constitute the basis for this paper we have set a maximum score of 200 ms. This is equivalent to a<br />
typical human reaction time.<br />
• The LoadSim-specific counter »LoadSim Action: Latency« is the weighted average of the client response time. According to the MMB3 rules and regulations this counter should also be less than 1000 ms. Analogously to the score, we have also reduced this value to 200 ms.
In addition, there are further performance counters that provide information about the »health« of an<br />
<strong>Exchange</strong> server and should not continuously increase during a simulation run:<br />
• SMTP <strong>Server</strong>: Categorizer Queue Length<br />
• SMTP <strong>Server</strong>: Local Queue Length<br />
• SMTP <strong>Server</strong>: Remote Queue Length<br />
• LoadSim Global: Task Queue Length<br />
For more information about the performance monitor and other tools for the analysis and monitoring of<br />
<strong>Exchange</strong> <strong>Server</strong>s see chapter System analysis tools<br />
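The 95% score described above is simply the 95th percentile of the transaction latencies. A minimal sketch of how such a score and the sizing limits used in this paper could be checked offline against sampled values; the counter sampling itself is outside the sketch, and the sample data is invented for illustration:

```python
# Evaluate sampled measurements against the limits used in this paper
# (200 ms score, 30% average CPU load, disk queue <= physical disks).

def score_95(latencies_ms):
    """95% score: the time that 95% of all transactions stay below."""
    ordered = sorted(latencies_ms)
    index = max(0, int(0.95 * len(ordered)) - 1)  # simple nearest-rank percentile
    return ordered[index]

def within_sizing_limits(latencies_ms, avg_cpu_pct, disk_queue, physical_disks):
    return (score_95(latencies_ms) <= 200      # max. 200 ms score
            and avg_cpu_pct <= 30              # max. 30% average CPU load
            and disk_queue <= physical_disks)  # queue <= number of disks

# 100 invented transaction latencies: most fast, a few slow outliers
samples = [50] * 90 + [180] * 5 + [900] * 5
print(score_95(samples))                                     # 180
print(within_sizing_limits(samples, avg_cpu_pct=25,
                           disk_queue=3, physical_disks=4))  # True
```

Note how the 95% score of 180 ms hides the five 900 ms outliers: this is exactly why the score must not be mistaken for a maximum response time.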
Let us now summarize this chapter in a few clear-cut sentences:<br />
• An exact performance statement can only be realized by performing a customer-specific load simulation<br />
with a customer-specific user profile.<br />
• Through idealized load simulations it is possible to derive sets of rules for server sizing, which then serve as an aid to sound planning. These rules should not be mistaken for a formula: they still have to be interpreted, i.e. the assumptions on which the rules are founded must reflect reality. To apply them to a real project, it is necessary to determine the requirements of the real users and to relate these to the standardized medium user used in this paper.
• Benchmark measurements should not be confused or equated with performance measurements.<br />
Benchmarks are optimized for maximum performance. With performance measurements, such as the<br />
ones on which this paper is based, the systems analyzed are configured for real operation.<br />
The rules shown below are based on measurements performed in the PRIMERGY Performance Lab with the load simulator LoadSim 2003 and the medium user profile.
<strong>Exchange</strong>-relevant resources<br />
After having explained how to measure an <strong>Exchange</strong> server in the previous chapter, we now wish to analyze<br />
the performance-critical components of an <strong>Exchange</strong> server, before we then discuss the individual<br />
PRIMERGY models with regard to their suitability and performance as an <strong>Exchange</strong> server in the following<br />
chapter.<br />
<strong>Exchange</strong> architecture<br />
At this juncture we will indicate several important points which should be taken into consideration when designing an Exchange environment. This chapter is especially intended for readers who are not conversant with the architecture of Exchange environments.
Decentrally distributed<br />
Exchange Server is designed for a decentralized network containing many servers. In other words, a company with, for example, 200,000 employees would not install one Exchange server for all of its employees, but would install 40, 50 or even 100 Exchange servers which reflect the organizational and geographical structure of the company. Exchange can nevertheless still be administered centrally and efficiently. The whole setup should be structured in such a way that users can access the Exchange server allocated to them, as well as all servers within a geographical location, through a LAN-type connection. WAN connections suffice between the individual locations.
This decentralized concept has several practical advantages:
• The computing performance is available at the location where the user needs it.<br />
• If a system fails, not all the users are affected.<br />
• Data can be replicated, i.e. it is available on several servers.
• Connections to Exchange servers at other company locations and to worldwide mail systems can be designed redundantly. If a server or a connection fails, Exchange automatically looks for another route; otherwise the most favorably priced route is used.
• The backup data volume for the classic backup (compare chapter Backup) is spread over several servers, and the backups can run in parallel.
• User groups with very different requirements (mail volume, data sensitivity, etc.) can be separated from<br />
each other.<br />
But there are also disadvantages:<br />
• Administration personnel is required at every geographical location for backup and hardware<br />
maintenance.<br />
• Depending on the degree of geographical spread, more hardware - particularly for backup - is<br />
necessary.<br />
• If a great many small servers are used in contrast to a few large server systems, higher costs are<br />
incurred for software licenses.<br />
Consolidation<br />
It is these disadvantages of decentralization in particular, affecting the Total Cost of Ownership (TCO), that - in times of sinking company sales and increasingly difficult market conditions - lead to the demand for consolidation in Exchange environments.
Exchange Server 2003 does justice to this trend toward consolidation. In addition to classic platform consolidation (use of fewer, but larger servers), the consolidation of locations is also made possible. Falling costs for WAN lines and intelligent client-side caching, as provided by Outlook 2003, are the prerequisites for this consolidation approach. Where geographically distributed servers were still required with Exchange 5.5 or Exchange 2000 Server, in many scenarios the number of locations can now be reduced with Exchange Server 2003.
In turn, several larger Exchange servers at one location provide the opportunity of combining the servers into a cluster, so that in the event of a hardware failure other server hardware can take over. In a highly decentralized scenario the hardware expenditure required for this would not be justified.
A modern infrastructure based on Exchange Server 2003 will, in comparison with an Exchange 5.5 environment, consist of considerably fewer servers and above all fewer locations. Even in comparison with Exchange 2000 Server a reduction in locations is conceivable, because the cached mode of Outlook 2003 and the optimized communication between Outlook 2003 and Exchange Server 2003 achieve a substantial reduction in the required network bandwidth.
Dedicated servers<br />
In addition to e-mail, <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> also offers a number of other components. It may therefore be<br />
practical to distribute various tasks over dedicated <strong>Exchange</strong> servers. Thus a distinction is made between<br />
the following <strong>Exchange</strong> server types:<br />
The mailbox server - also frequently known as back-end server - houses the user mailboxes and is<br />
responsible for delivering the mail to the clients by means of a series of different protocols, such as MAPI,<br />
HTTP, IMAP4 or POP3.<br />
The public folder server is dedicated to public folders, which are brought to the end user by means of<br />
protocols, such as MAPI, HTTP, HTTP-DAV or IMAP4.<br />
A connector server is responsible for various connections to other <strong>Exchange</strong> sites or mail systems. In<br />
this regard, standard protocols, such as SMTP (Simple Mail Transfer Protocol), or X.400 can be used, or<br />
proprietary connectors to mail systems, such as Lotus Notes or Novell GroupWise. Such a dedicated<br />
server should then be used if a connection type is used very intensively.<br />
The term front-end server is used for a server which communicates with the clients and passes their requests on to a back-end server, which typically houses the mailboxes and public folders. Such a tiered scenario of front-end and back-end servers is frequently implemented for web-based client access - Outlook Web Access (OWA).
Moreover, a distinction is made between so-called real-time collaboration servers, as well as data<br />
conferencing servers, video conferencing servers, instant messaging servers and chat servers,<br />
which accommodate one or more of these <strong>Exchange</strong> components in a dedicated manner. (Note: The<br />
real-time collaboration features contained in <strong>Exchange</strong> 2000 <strong>Server</strong> have been removed from <strong>Exchange</strong><br />
<strong>Server</strong> <strong>2003</strong> and incorporated in a dedicated Microsoft product »Live Communications <strong>Server</strong> <strong>2003</strong>« for<br />
real-time communication and collaboration.)<br />
Below, we will turn our attention to the most frequently used <strong>Exchange</strong> server type, the mailbox server, which<br />
houses the users‟ mailboxes and public folders.<br />
Active Directory and DNS<br />
Exchange 2000 Server and Exchange Server 2003 are completely integrated into the Windows Active Directory. In contrast to previous Exchange versions, such as 4.0 and 5.5, the information about mail users and mail folders is no longer kept in Exchange itself, but in the Active Directory. Exchange makes intensive use of the Active Directory and of DNS (Domain Name System). This must be taken into consideration in the overall design of an Exchange Server 2003 infrastructure: in addition to Exchange servers with adequate performance, Active Directory servers with adequate performance are also required, as they could otherwise have a detrimental effect on Exchange performance. As the Active Directory typically mirrors the organizational structure of a company, organizational and geographical realities must also be taken into consideration in the design. For performance reasons - apart from small installations in the small-business sector or in branch offices - Active Directory and Exchange should not be installed on the same server, because the amount of processor and memory capacity required by the Active Directory is substantial. Whereas the disk storage needed for the Active Directory is quite moderate, substantial computing performance is required for the administration and processing of accesses to it.
In larger <strong>Exchange</strong> environments the <strong>Exchange</strong> server should not simultaneously assume the role of a<br />
domain controller, but dedicated domain controllers should be used. In this respect, the sizing of the domain<br />
controllers is at least as complex as the sizing of Exchange servers. Since it would go far beyond the scope of this white paper, the sizing of the Active Directory cannot be discussed further here. Helpful information on the design and sizing of the Active Directory can be found at Windows Server 2003 Active Directory [L17].
Operating system<br />
Exchange Server 2003 can be operated not only on the basis of Windows 2000 Server but also on Windows Server 2003. Conversely, however, Exchange 2000 Server cannot be operated on Windows Server 2003. The following table shows which version of Exchange can be used on which operating system:

                        Exchange 5.5   Exchange 2000   Exchange 2003
Windows 2000            ×              ×               ×
Windows 2003 (32-bit)                                  ×
Windows 2003 (64-bit)

It should be noted that Exchange Server 2003 only runs on the 32-bit versions of Windows; it is not possible to install Exchange Server 2003 on a 64-bit version of Windows Server 2003. 64-bit support will only become available with the next version, Exchange Server 2007.
Many new functionalities of <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> are, however, only available if <strong>Exchange</strong> is operated on<br />
the basis of Windows <strong>Server</strong> <strong>2003</strong>. This includes in particular performance-relevant features, such as:<br />
• Memory tuning with /3GB<br />
The /3GB switch is also already available under Windows <strong>Server</strong> <strong>2003</strong> in the standard edition and<br />
causes a shift in the distribution of the virtual address space to the advantage of application at a ratio of<br />
3:1. Under Windows 2000 this option was only supported by Advanced <strong>Server</strong>.<br />
• Memory tuning with /USERVA switch<br />
The switch /USERVA is used to fine tune memory distribution in connection with the /3GB switch. This<br />
option allows the operating system in the kernel area to create larger administration tables with<br />
nevertheless almost 3 GB virtual address space being made available to the application.<br />
• Data backup with Volume Shadow Copy Service (VSS)<br />
This functionality of Windows <strong>Server</strong> <strong>2003</strong> enables snapshot backups of the <strong>Exchange</strong> databases to be<br />
generated during ongoing operation of <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong>. Further details are described in the<br />
chapter Backup.<br />
• Support of 8-node clusters<br />
On the basis of Windows <strong>Server</strong> <strong>2003</strong> Enterprise Edition, clusters with up to eight nodes can be<br />
implemented, whereas under Windows 2000, Advanced <strong>Server</strong> supports only two nodes and<br />
Datacenter <strong>Server</strong> four nodes:<br />
<br />
                                              Number of cluster nodes<br />
Windows 2000 Advanced Server                             2<br />
Windows 2000 Datacenter Server                           4<br />
Windows Server 2003, Enterprise Edition                  8<br />
Windows Server 2003, Datacenter Edition                  8<br />
• Support of Mount Points<br />
Windows <strong>Server</strong> <strong>2003</strong> allows disk volumes to be added to an existing volume, instead of providing them<br />
with a separate drive letter, thus enabling the given limit of max. 26 possible drive letters to be<br />
overcome - which represented a bottleneck in clustered environments in particular.<br />
Computing performance<br />
It is obvious that the more powerful the processors, and the more of them in a system, the faster the data are<br />
processed. However, with <strong>Exchange</strong> the CPU performance is not the only decisive factor. The Microsoft<br />
<strong>Exchange</strong> server provides an acceptable performance even with a relatively small CPU configuration. In fact<br />
it is important that a fast disk subsystem and an adequate memory configuration are available and, of<br />
course, that the network connection is not a bottleneck. It becomes especially evident with small server<br />
systems that the processor performance is not the restricting factor, but the configuration options of the disk<br />
subsystem.<br />
The diagram opposite can be used as a guideline for the number of processors. [Figure: recommended number<br />
of CPUs (1, 2 or 4) as a function of the number of users, from 500 to 4000.] There are no strict limits defining<br />
the number of processors. All systems are available with processors of different performance. Thus a 2-way<br />
system with highly clocked processors and a large cache can by all means be within the performance range of<br />
a poorly equipped 4-way system. Which system is used with regard to possible expandability ultimately<br />
depends on the forecast of the customer's growth rates. In addition, it could prove advantageous to also use a<br />
2-way or 4-way system for a smaller number of users, as this type of system frequently offers better<br />
configuration options for the disk subsystem.<br />
As far as pure computing performance is concerned, 4-way systems are generally adequate for <strong>Exchange</strong><br />
<strong>Server</strong> <strong>2003</strong>, because with large <strong>Exchange</strong> servers it is not computing performance but <strong>Exchange</strong>-internal<br />
memory administration that sets the limits. However, it may still make sense to use an 8-way system if very<br />
CPU-intensive <strong>Exchange</strong> extensions are used in addition to the pure <strong>Exchange</strong> server, or if the use of<br />
Windows <strong>Server</strong> <strong>2003</strong>, Datacenter Edition is being considered with regard to greater availability.<br />
With more than approximately 5000 active users on a server the scaling of <strong>Exchange</strong> <strong>Server</strong> is not<br />
satisfactory (see Main memory). Therefore, for large numbers of users a scale-out scenario, in which several<br />
servers in a logical array can serve an arbitrarily large number of mail users, should be considered instead of<br />
a scale-up scenario.<br />
Main memory<br />
As far as the main memory is concerned, there is quite a simple rule: the more, the better. Firstly, the main<br />
memory of the server must at least be large enough for the system not to be compelled to swap program<br />
parts from the physical memory out to the hard disk; otherwise this would hopelessly slow down the system.<br />
As far as the program code is concerned, 512 MB would generally suffice: the system would then run freely<br />
and no program code would have to be swapped out to the hard disk.<br />
If, however, more memory is available, it is then used by <strong>Exchange</strong> as a cache for the data from the<br />
<strong>Exchange</strong> database, the so called store. This leads to what could be classed as a substantial load relief for<br />
the disk subsystem and thus a gain in performance. After all, accesses to the memory are about 1000 times<br />
faster than accesses made to the hard disks.<br />
But unfortunately there are also limits here. On the one hand, 4 GB RAM is a magic limit: it is not possible to<br />
address any more with a 32-bit address. Windows 2000 and <strong>2003</strong> have mechanisms for overcoming this limit,<br />
the so-called Physical Address Extension (PAE). Depending on the version, Windows supports up to 128 GB<br />
RAM (see the table opposite). From a hardware viewpoint it would also be unproblematic to provide this<br />
memory in the PRIMERGY RX600 S3 or RX800 S2, but Microsoft <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> does not support<br />
PAE addressing and is therefore limited to 4 GB RAM. Further information is available at Physical Address<br />
Extension - PAE Memory and Windows [L18].<br />
                                        # CPU   RAM [GB]   /3GB support<br />
Windows 2000                               4        4           -<br />
Windows 2000 Advanced                      8        8           ✓<br />
Windows 2000 Datacenter                   32       32           ✓<br />
Windows 2003 Standard                      4        4           ✓<br />
Windows 2003 Enterprise                    8       32           ✓<br />
Windows 2003 Datacenter                   32       64           ✓<br />
Windows 2003 Standard SP1 and R2           4        4           ✓<br />
Windows 2003 Enterprise SP1 and R2         8       64           ✓<br />
Windows 2003 Datacenter SP1 and R2        64      128           ✓<br />
A further reduction in the effective memory results from the system architecture of Windows <strong>Server</strong> <strong>2003</strong>.<br />
By default, the virtual address space is divided into 2 GB for the operating system and 2 GB for applications,<br />
to which <strong>Exchange</strong> also belongs. By means of an appropriate configuration parameter it is possible to shift<br />
this split to 1 GB : 3 GB in favor of the applications. It is advisable to use this so-called /3GB option from a<br />
physical memory configuration of 1 GB RAM upwards. (For better understanding: the /3GB option refers to the<br />
administration of virtual memory addresses, not to the physical memory.) The /USERVA option, with which<br />
the distribution of the address space around the 3:1 split can be controlled in a more detailed way, was<br />
introduced with Windows <strong>Server</strong> <strong>2003</strong> as a supplement to the /3GB option.<br />
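Both switches are entered in the boot.ini file of the server. A minimal sketch of such an entry, assuming a default single-partition installation; the ARC path is a placeholder, and the /USERVA value of 3030 is the value Microsoft commonly recommends for dedicated Exchange Server 2003 machines:<br />

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
; /3GB shifts the 4 GB virtual address space split to 3 GB user / 1 GB kernel.
; /USERVA=3030 returns a small amount of that space to the kernel for
; page table entries, as recommended for Exchange Server 2003.
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB /USERVA=3030
```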
The necessity of the /3GB switch for <strong>Exchange</strong> becomes clearer given the effect that <strong>Exchange</strong> evidently<br />
requires about twice as many virtual addresses as physically used memory, something Microsoft describes<br />
only indirectly. This effect can be explained from a programming point of view and can even be regarded as<br />
good, or at least modern, programming style because of the underlying methodology. However, since with<br />
32-bit systems the physically available memory is approaching the limit of the available virtual addresses -<br />
and can even exceed the addressable memory - this memory administration architecture represents an<br />
(unnecessary) limitation.<br />
In other words, it could be assumed that an IA64 system, such as the PRIMERGY RXI600, is the optimal<br />
system for <strong>Exchange</strong>. In reality, a 64-bit architecture would be optimal for the internal memory administration<br />
of <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong>, but not for its other components. There is at present no 64-bit version of<br />
<strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong>, so that despite the almost unlimited virtual memory space of 8 TB (terabytes) that<br />
64-bit Windows provides to applications, <strong>Exchange</strong> in its present version would from today's perspective not<br />
run faster on an IA64 system, but more slowly.<br />
But let us return to the current hardware options. At a rough estimate: 3 GB of virtual address space for<br />
<strong>Exchange</strong>, of which at most half can be used physically, results in 1.5 GB. In fact, Microsoft has limited the<br />
ESE cache size for the store to about 900 MB; according to a Microsoft description, this may be increased to<br />
1.2 GB. Please note: this memory requirement is only for the store cache. It goes without saying that<br />
<strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> uses additional memory for other data. With 2 GB RAM an <strong>Exchange</strong> server is<br />
therefore already quite well equipped. Memory above a configuration of 3 GB is only used by <strong>Exchange</strong> to a<br />
limited extent. However, a configuration with up to 4 GB can be very practical if other components, such as a<br />
virus scanner or fax services, are to be run in addition to <strong>Exchange</strong>.<br />
In addition to these considerations as to the maximum (sensibly) effective memory, there are also guidelines<br />
based on empirical values, which calculate the memory configuration on a user basis. Microsoft recommends<br />
calculating 512 kB RAM per active user. If you assume a base of 512 MB for the operating system and the<br />
core components, the outcome is the linear lower curve in the diagram opposite:<br />
RAM = 512 MB + 0.5 MB × [number of users]<br />
If this is reflected against the limitations discussed above, it is also possible to read off an upper limit of users<br />
that an individual <strong>Exchange</strong> server can serve: approximately 5000 users. This is not a fixed limit: it is by no<br />
means the case that the <strong>Exchange</strong> server crashes with a higher number of users, it is simply no longer able<br />
to function efficiently under a high load. On account of the lack of memory the disk subsystem is put under a<br />
greater load; due to higher throughput times the load placed on the CPU ultimately also increases. More jobs<br />
have to be managed in the queues, which in turn causes a higher administration outlay and a greater memory<br />
requirement, which is ultimately at the expense of the cache memory. Thus the process escalates in such a<br />
way that in the end the users can no longer be adequately served.<br />
In addition, practical experience has shown that »small« systems in particular benefit from a somewhat<br />
larger memory configuration (see upper curve).<br />
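The rule of thumb above, combined with the 4 GB addressing limit discussed earlier, can be expressed as a small calculation sketch; the function name and the hard 4096 MB cap are our own illustration of this chapter's figures:

```python
def exchange_ram_mb(users: int) -> int:
    """Rule of thumb from this chapter: 512 MB base for the operating system
    and core components plus 0.5 MB (512 kB) per active user, capped at
    4096 MB, since Exchange Server 2003 cannot address more than 4 GB."""
    recommended = 512 + 0.5 * users
    return int(min(recommended, 4096))

for users in (500, 2000, 5000):
    print(users, "users ->", exchange_ram_mb(users), "MB")
```

For 5000 users the uncapped formula already calls for more than 3 GB, which matches the chapter's observation that about 5000 active users mark the practical ceiling of a single server.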
Disk subsystem<br />
Practical experience shows that <strong>Exchange</strong> systems are frequently oversized as far as CPU performance is<br />
concerned, but hopelessly undersized when it comes to the disk subsystem. The overall system performance<br />
is then totally unsatisfactory, which is why we deal with the disk subsystem in depth. In the following chapters<br />
we will see that system performance (with regard to <strong>Exchange</strong>) is frequently determined not by the CPU<br />
performance but by the connection options for a disk subsystem.<br />
An <strong>Exchange</strong> server that is primarily used as a mailbox server needs a large amount of disk storage in order<br />
to efficiently administer the mailbox contents. The internal hard disks of a server are as a rule inadequate and<br />
an external disk subsystem is needed. There are currently four approaches to disk subsystems:<br />
Direct Attached Storage (DAS) is the name of a storage technology through which the hard disks are<br />
connected directly to one or more disk controllers installed in the server. Typically, SCSI, SAS or SATA is<br />
used in conjunction with intelligent RAID controllers. These controllers are relatively cheap and offer good<br />
performance. The hard disks are either in the server housing or in the external disk housing, which are<br />
basically used to accommodate disks and the power supply.<br />
DAS offers top-class performance and is a good choice for small and medium-sized <strong>Exchange</strong><br />
installations. However, for large-scale <strong>Exchange</strong> installations, DAS has some limitations as regards<br />
scaling. The physical disks must all be connected directly to the server through SCSI cabling. The<br />
number of disks per SCSI bus is also limited as is the number of possible SCSI channels in one system.<br />
This results in limits regarding the maximum configuration. Further DAS disadvantages are the extensive<br />
and thus error-prone SCSI cabling as well as the fact that clusters are not fully supported. All servers<br />
integrated in a cluster must be able to access a common data pool; in contrast DAS requires a dedicated<br />
disk allocation.<br />
Under the aspect of these limitations, the networks of Network Attached Storage (NAS) or Storage Area<br />
Network (SAN) appear considerably more attractive. Both concepts are based on the idea that the disk<br />
subsystem must be detached from the server and made available as a separate unit in a network to one<br />
or more servers. Vice versa, a server can also access several storage units.<br />
Network Attached Storage (NAS) is principally a classical file server. Such an NAS server is specialized<br />
in the efficient administration of large data quantities and provides this storage to other servers through a<br />
LAN. Internally, NAS servers typically use the DAS disk and controller technology. Classical LAN<br />
infrastructures are used for the data transport to and from the servers. Consequently, NAS systems can<br />
be constructed at reasonable prices. As the data storage is not allocated on a dedicated basis to one<br />
server, the use of many NAS servers reaches high scaling levels.<br />
Classic NAS topology is basically unsuited for <strong>Exchange</strong> 2000 <strong>Server</strong> and <strong>2003</strong>. <strong>Exchange</strong> uses an<br />
installable file system (IFS), which requires access to a block-oriented device. The IFS driver is an<br />
integral component of the <strong>Exchange</strong> 200x architecture and is used for internal <strong>Exchange</strong> processes. If the<br />
<strong>Exchange</strong> database is filed on a non-block-oriented device, the <strong>Exchange</strong> 200x database cannot be<br />
mounted (cf. Microsoft Knowledge Base Article Q317173).<br />
However, in addition to the classic file sharing through NFS and CIFS, modern NAS systems also provide<br />
special disk drivers, which make the NAS visible to Windows 200x as a block-oriented device. If this is the<br />
case, a NAS can also be used in conjunction with <strong>Exchange</strong> 200x.<br />
Storage Area Network (SAN) is currently the most innovative technology in the fast-growing storage market.<br />
In contrast to NAS, a SAN does not use the LAN for data transport, but its own network of high bandwidth<br />
based on Fibre Channel (FC). The conversion from the LAN protocol to SCSI required with NAS is not<br />
needed with Fibre Channel, since Fibre Channel uses the same data protocol as SCSI. However, Fibre<br />
Channel is not subject to the physical restrictions of SCSI. Thus, in contrast to SCSI, where the cable length<br />
is restricted to 25 meters, cable lengths of up to 10 kilometers are possible depending on the cable medium<br />
and bandwidth:<br />
<br />
              Copper    Glass fiber<br />
                        MMF 62.5 µm   MMF 50 µm   SMF 9 µm<br />
1 Gbit FC     10 m      175 m         500 m       10 km<br />
2 Gbit FC     -         90 m          300 m       2 km<br />
4 Gbit FC     -         45 m          150 m       1 km<br />
<br />
Fibre Channel also offers much greater scope with regard to the number of devices. In contrast to the<br />
maximum of 15 devices on a SCSI channel, Fibre Channel enables up to 126 devices through a so-called<br />
arbitrated loop (FC-AL), and this limit can be increased further by using Fibre Channel switches. In a Storage<br />
Area Network all the servers and storage systems are connected to each other and can thus access a large<br />
data pool. A SAN is therefore ideal for cluster solutions, where several servers share the same storage areas<br />
in order to take over the tasks of a server when it fails. A SAN is an ideal solution for large or clustered<br />
<strong>Exchange</strong> installations.<br />
Internet Small Computer System Interface (iSCSI), specified as RFC 3720 by the »Internet Engineering<br />
Task Force« (IETF), is increasingly gaining in significance alongside Fibre Channel (FC) with its completely<br />
separate infrastructure. The concept is based on the idea of separating the disk subsystem from the server<br />
and making it available to one or more servers as a separate unit in a network; conversely, a server can also<br />
access several storage units. In contrast to most Network Attached Storage (NAS) products, which provide<br />
the protocols <strong>Server</strong> Message Block (SMB) or Common Internet File System (CIFS) - known from the<br />
Microsoft world - or the Network File System (NFS) - known from UNIX / Linux - through a LAN, both iSCSI<br />
and Fibre Channel make block devices available in the server. Some applications, e.g. <strong>Exchange</strong> <strong>Server</strong>,<br />
need block-device interfaces for their data repository. Such applications cannot see whether they access a<br />
directly connected disk subsystem or whether the data are to be found somewhere in the network. Unlike<br />
Fibre Channel with its complex infrastructure of special controllers (Host Bus Adapters, HBAs), separate<br />
cabling, separate switches and even separate management, iSCSI uses the infrastructure known from<br />
TCP/IP - hence the designation »IP-SAN«. As a result of using existing infrastructures, the initial costs with<br />
iSCSI are lower than in the Fibre Channel environment. See also Performance Report - iSCSI and iSCSI<br />
Boot [L4].<br />
Transaction principle<br />
Microsoft <strong>Exchange</strong> <strong>Server</strong> works in a transaction-oriented way and stores all data in databases (the<br />
so-called store). <strong>Exchange</strong> 2000 <strong>Server</strong> and <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> support up to 20 separate databases,<br />
which can be structured in four so-called storage groups, each with a maximum of five databases. A joint<br />
transaction log file is written per storage group. Compared with <strong>Exchange</strong> 5.5 with only one storage group<br />
and database, this architecture has a number of advantages and thus overcomes a number of limitations.<br />
Here are some of the most important advantages that are of interest for our sizing considerations:<br />
• A database is restricted to one logical volume, whose size is limited by the disk subsystem: many RAID<br />
controllers can only combine a maximum of 32 disks into one volume. This limitation can be overcome by<br />
using several databases.<br />
• One backup process is possible per storage group; as a result the backup process can be effected in<br />
parallel by using several storage groups and thus optimized as regards time. Prerequisite for this is of<br />
course adequate backup hardware.<br />
• Under the aspect of availability, the times for restoring a database after its loss are critical. By distributing<br />
the data among several smaller databases it is possible to reduce this restore time.<br />
• Sensitive data can be physically separated by using different databases and storage groups. This is<br />
particularly interesting when an ASP (application service provider) wants to serve several customers<br />
on one <strong>Exchange</strong> server.<br />
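The limits just described - a maximum of four storage groups with five databases each - can be captured in a short planning sketch; the function and the example figures are our own illustration:

```python
MAX_STORAGE_GROUPS = 4   # Exchange 2000/2003 limit
MAX_DBS_PER_GROUP = 5    # databases per storage group

def plan_databases(n_databases: int) -> list[int]:
    """Spread n databases over as few storage groups as possible,
    respecting the Exchange Server 2003 limits (4 x 5 = 20 databases)."""
    if n_databases > MAX_STORAGE_GROUPS * MAX_DBS_PER_GROUP:
        raise ValueError("Exchange Server 2003 supports at most 20 databases")
    groups = []
    while n_databases > 0:
        take = min(n_databases, MAX_DBS_PER_GROUP)
        groups.append(take)
        n_databases -= take
    return groups

print(plan_databases(12))   # three storage groups: [5, 5, 2]
```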
Access pattern<br />
Database administration activities result in two completely complementary data access patterns. On the one<br />
hand there is the database with 100% random access, of which typically 2/3 are read and 1/3 are write<br />
accesses. On the other hand, the transaction log files produce a data stream that is written 100%<br />
sequentially. To do justice to this it is advisable to spread the databases and log files over different physical<br />
disks.<br />
A second aspect as regards the physical separation of log files and databases: In the transaction concept all<br />
changes to the database are recorded in the log files. If the database is lost, it is possible to completely<br />
restore the database using a backup and the log files since the backup was generated. In order to achieve<br />
maximum security it is sensible to store the log files and the database on physically different hard disks so<br />
that all data are not lost in the event of a disk crash. As long as only one of the two sets of information is lost,<br />
the missing information can be restored. This applies particularly to small <strong>Exchange</strong> installations, where the<br />
small number of available hard disks tempts one to store both sets of information on a single data medium.<br />
Caches<br />
For an intelligent SCSI controller with its own cache mechanisms there are further opportunities of adapting<br />
the disk subsystem to the requirements of the <strong>Exchange</strong> server. Thus the write-back cache should be<br />
activated for the volume on which the log files are to be found. Read-ahead caches should also be activated<br />
for the log files; this has advantages during restore where log files have to be read. The same applies to the<br />
volume on which queues (SMTP or MTA queues) are filed.<br />
However, for the volume on which the store is filed it is not practical to activate the read-ahead cache. It may<br />
sound illogical to deactivate a cache that is supposed to accelerate accesses. However, the store is a<br />
database of many gigabytes, which is accessed randomly in blocks of 4 kB. The probability of a given 4 kB<br />
block from such a large data amount being found in a cache of a few MB, and thus not having to be read<br />
from disk, is very low. With some controllers it is unfortunately not possible to deactivate the read cache<br />
independently of the write cache, so that on every read a check is first made as to whether the requested<br />
data are available in the cache. In this case, better overall throughput is achieved by deactivating the cache<br />
entirely (except with RAID 5), as read accesses are typically twice as frequent as write accesses.<br />
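The argument can be quantified with a one-line estimate: under uniform random access, the expected cache hit rate is simply cache size divided by store size. A small sketch with invented example sizes:

```python
def expected_hit_rate(cache_mb: float, store_gb: float) -> float:
    """Probability that a randomly accessed 4 kB block of the store is
    found in the controller cache, assuming uniform random access."""
    return cache_mb / (store_gb * 1024)

# A 64 MB controller cache in front of a 100 GB store:
print(f"{expected_hit_rate(64, 100):.4%}")   # well below 0.1 %
```

Even a generous controller cache therefore contributes practically nothing to random store reads, which is why deactivating it can improve overall throughput.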
In addition, each individual hard disk also provides write and read caches. As standard the read cache is<br />
always activated. A lengthy discussion is going on about the activation of the write cache, because in<br />
contrast to the write caches of the RAID controller this cache does not have a battery backup. On condition<br />
that the server (and of course the disk subsystem) is UPS-protected, it may be sensible to activate the write<br />
caches of the hard disks. For security reasons, all hard disks from <strong>Fujitsu</strong> are supplied with the write cache<br />
deactivated. Some of our competitors supply their hard disks with activated write caches, with the result that<br />
at first sight such systems appear to perform better in comparative tests. However, if a system is not<br />
protected against power failures by a UPS, or if it is powered off abruptly, the data in the activated write<br />
caches of the hard disks can be lost.<br />
RAID levels<br />
One of the most failure-prone components of a computer system is the hard disk. It is a mechanical part and<br />
is used extensively above all by database-based applications, to which <strong>Exchange</strong> also belongs. It is therefore<br />
important to be prepared for the failure of such a component. To this end, there are methods of arranging<br />
several hard disks in an array in such a way that the failure of a single hard disk can be coped with. This is<br />
known as a Redundant Array of Independent Disks, or RAID for short. Below is a brief overview of the most<br />
important RAID levels. The figures illustrate how the blocks of a data flow A B C D E F ... are organized on<br />
the individual disks.<br />
RAID 0 RAID level 0 is also denoted as the »non-redundant striped array«. With RAID 0, two or more hard<br />
disks are combined solely with the aim of increasing the read/write speed. The data are split up<br />
into small blocks with a size of between 4 kB and 128 kB, so-called stripes, and are stored<br />
alternately on the disks. In this way it is possible to access several disks at the same time, which<br />
in turn increases the speed. Since no redundant information is generated with RAID 0, all data in<br />
the RAID 0 array are lost if only one hard disk fails. RAID 0 offers the fastest and most efficient<br />
access, but is only suitable for data that can be regenerated without any problems at all times.<br />
RAID 1 With RAID 1, also known as »drive duplexing« or »mirroring«, identical data are stored on two hard<br />
disks, which results in a redundancy of 100%. In addition, alternating accesses can also increase<br />
the read performance. If one of the two hard disks fails, the system continues to work with the<br />
remaining hard disk without interruption. RAID 1 is the first choice in performance-critical,<br />
error-tolerant environments. Moreover, there is no alternative to RAID 1 when error tolerance is<br />
called for but no more than two disks are required or available. However, the high failsafe level<br />
has its price: twice the number of hard disks is necessary.<br />
RAID 5 A RAID 5 array requires at least three hard disks. Similar to RAID 0, the data flow is split into<br />
blocks. Parity information is formed across the individual blocks and stored in addition to the data<br />
on the RAID array, whereby a data block and its parity information are always written to two<br />
different hard disks. If one hard disk fails, the data can be restored with the aid of the remaining<br />
parity information. The wastage caused by the additional parity information sinks with the number<br />
of hard disks used and comes to 1/(number of disks); a simple rule of thumb is one disk of<br />
wastage per RAID 5 array. RAID 5 offers redundancy and uses disk resources most economically<br />
of all. However, the cost is the performance required for the parity calculation, which even special<br />
RAID controllers cannot fully compensate.<br />
[Figure: block layout of RAID 0 (blocks A, B, C, ... striped across the disks), RAID 5 (data blocks interleaved<br />
with parity blocks P(ABC), P(DEF), P(GHI)) and RAID 1 (blocks A, B, C mirrored as A', B', C').]<br />
RAID 1+0 Also occasionally referred to as RAID 10. Actually it is<br />
not a separate RAID level, but merely RAID 0<br />
combined with RAID 1. Thus the features of the two<br />
basic levels - security and sequential performance -<br />
are combined. RAID 1+0 uses an even number of<br />
hard disks. Two disks are combined and the data are<br />
mirrored (RAID 1). The data are distributed over this<br />
pair of disks (RAID 0). RAID 1+0 is particularly suited<br />
for the redundant storing of large files. Since no parity<br />
has to be calculated in this case, write access with<br />
RAID 1+0 is very fast.<br />
RAID 0+1 In addition to the RAID level combination 1+0 there is also the combination 0+1. Here a RAID 0 array is formed across half of the hard disks, and its contents are then mirrored onto the other half (RAID 1). In terms of performance, RAID 0+1 and 1+0 are identical. However, RAID 1+0 offers a higher degree of availability than RAID 0+1: if a disk fails in a RAID 0+1, redundancy is lost entirely, whereas in a RAID 1+0 further disks may fail as long as both disks of the same RAID 1 pair are not affected. The likelihood of both disks of a given RAID 1 pair failing in a RAID 1+0 consisting of n disks, 2/(n²−n), is considerably smaller than the probability of two disks that do not form a mirror pair being affected, (2n−4)/(n²−n).
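The two expressions from the text can be compared directly. The following minimal sketch (our own, not part of the original document) evaluates both terms for a few array sizes; their ratio simplifies to n − 2, showing how quickly the advantage of RAID 1+0 grows with the number of disks:

```python
from fractions import Fraction

def p_mirror_pair(n):
    # Term from the text for RAID 1+0: both disks of one given
    # RAID 1 pair are the two failed disks: 2 / (n^2 - n)
    return Fraction(2, n * n - n)

def p_non_pair(n):
    # Term from the text for RAID 0+1: the two failed disks do not
    # form a mirror pair: (2n - 4) / (n^2 - n)
    return Fraction(2 * n - 4, n * n - n)

for n in (4, 8, 16):
    ratio = p_non_pair(n) / p_mirror_pair(n)   # simplifies to n - 2
    print(n, p_mirror_pair(n), p_non_pair(n), ratio)
```

For n = 4 the ratio is already 2; for n = 16 it is 14, i.e. the critical double failure is fourteen times less likely in the RAID 1+0 arrangement.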
Others There are a number of further RAID levels, some of which are no longer in use today, as well as other combinations such as RAID 5+0.
More information about the different RAID levels is to be found in the white paper Performance Report –<br />
Modular RAID [L5].<br />
For all RAID levels care must be taken that hard disks of the same capacity and the same performance are used; otherwise the smallest hard disk determines the overall capacity and the slowest hard disk the overall performance. The performance of a RAID array is determined on the one hand by the RAID level used, but also by the number of disks in the array. The RAID controllers themselves also show differing performance, particularly for the more complex RAID algorithms such as RAID 5. Finally, parameters such as block and stripe size, which have to be defined when setting up the RAID array, also influence the efficiency of a RAID array. The diagram opposite shows the relative performance of various RAID arrays.
[Figure: data block layout of RAID 1+0 (mirrored disk pairs A/A' ... H/H' combined into a RAID 0 stripe) and of RAID 0+1 (two RAID 0 stripes A-H and A'-H' mirrored via RAID 1)]
Data throughput<br />
Current SCSI RAID controllers provide a data throughput of 320 MB/s per SCSI channel. This is more than adequate for database-oriented applications, even if 7 or more hard disks are operated on one channel. A frequent mistake when estimating the number of disks per SCSI channel is to divide the maximum data transfer rate of the SCSI bus by the peak data transfer rate of a hard disk. With fast hard disks the peak rate may well be over 80 MB/s, according to which a SCSI bus would already be at full capacity with four hard disks. However, this calculation only applies to a theoretical extreme situation. The diagram below shows the real situation: the theoretical curve is roughly achieved only with purely sequential reads of large data blocks, as is the case with video streaming. As soon as write operations are added, the data throughput drops markedly. For random access with block sizes of 4 and 8 kB, as occurs with access to the Exchange database, the throughput per disk is only approximately 10 MB/s. This means that the maximum possible number of hard disks can be run on one SCSI bus without any trouble.
SCSI RAID controllers provide up to 4 SCSI buses, so the possible throughput in principle adds up. It is therefore important for such controllers to be used in an adequate PCI slot. The table below shows the various PCI bus speeds and the throughputs measured with them. However, the data throughputs also depend on the type and number of the controllers used as well as on the memory interface (chip set) of the server. To ensure that controller type and controller count harmonize with the server, each of these is tested and certified for the individual systems by Fujitsu; the system configurator determines which and how many controllers can sensibly be used per system.
PCI Bus                     Throughput in MB/s
                            theoretical   measured
PCI 33 MHz, 32-bit               133          82
PCI 33 MHz, 64-bit               266         184
PCI-X 66 MHz, 64-bit             533         330
PCI-X 100 MHz, 64-bit            800         n/a
PCI-X 133 MHz, 64-bit           1066         n/a
PCI-E 2500 MHz, 1×               313         250
PCI-E 2500 MHz, 2×               625         500
PCI-E 2500 MHz, 4×              1250        1000
PCI-E 2500 MHz, 8×              2500        2000
PCI-E 2500 MHz, 16×             5000        4000
Hard disks<br />
A major influence on performance is the speed of the hard disk. In addition to the mean access time, the rotational speed in particular is an important parameter: the faster the disk rotates, the more quickly the data of a whole track can be transferred. The data density of the disk also has an influence: the closer together the data lie on the disk, i.e. the more data fit into one track, the more data can be transferred per revolution without repositioning the heads.
In the SCSI and SAS environment only disks from the top of the performance range are offered: no hard disks are offered with less than 10000 rpm (revolutions per minute) or with a seek time (positioning time) greater than 6 ms. The table below shows the currently available disk types. Hard disks with even greater capacities are to be expected in the near future.

Type        rpm     Capacity [GB]
2½" SATA     7200   60, 80, 100
3½" SATA     7200   160, 250, 500
2½" SAS     10000   36, 73
3½" SCSI    10000   36, 73, 146, 300
3½" SCSI    15000   36, 73, 146
The rotational speed of a hard disk is directly reflected in the number of read/write operations that the disk can process per unit of time. If the number of I/O commands that an application produces per second is known, it is possible to calculate the number of hard disks required to prevent a bottleneck. Compared with a hard disk with 10 krpm, a hard disk with 15 krpm shows - depending on the access pattern - an up to 40% higher performance, particularly in the case of random accesses with small block sizes as occur with the Exchange database. For sequential accesses with large block sizes, as occur in backup and restore processes, the advantage of 15 krpm is reduced to between 10% and 12%.

rpm          IO/s
5400           62
7200           75
10000         120
15000         170
Moreover, the number of hard disks in a RAID array plays a major role. For example, eight 36 GB disks in a RAID 1+0 are substantially faster than two mirrored 146 GB disks, although the effective capacity is roughly the same. In other words, it is necessary to weigh the number of available hard disk slots, the required disk capacity and ultimately the costs against each other. From a performance point of view, the rule is: more small hard disks rather than fewer large ones.
If the Exchange Server 2003 is placed under stress using the medium load profile of LoadSim 2003, then 0.6 I/Os per second per user will occur for the Exchange database. The table below shows the required number of hard disks subject to the number of users, disk rotational speed and RAID level. It takes into consideration that write accesses need two I/O operations with a RAID 10 and up to four I/O operations with a RAID 5. If you also take the typical database access profile for Exchange with 2/3 read and 1/3 write accesses as the basis, the I/O rate for a RAID 10 is calculated according to the formula

   IO(RAID 10) = (2/3 + 2 × 1/3) × IO = 4/3 × IO

and the I/O rate for a RAID 5 according to the formula

   IO(RAID 5) = (2/3 + 4 × 1/3) × IO = 2 × IO
# Users   IO/s    RAID 10                        RAID 5
                  # IO   # Disks                 # IO   # Disks
                         10 krpm   15 krpm              10 krpm   15 krpm
50          30      40      2         2            60      3         3
100         60      80      2         2           120      3         3
500        300     400      4         4           600      5         4
1000       600     800      8         6          1200     10         8
2000      1200    1600     14        10          2400     20        15
3000      1800    2400     20        16          3600     30        22
4000      2400    3200     28        20          4800     40        29
5000      3000    4000     34        24          6000     50        36
However, it should be noted that the number actually required depends on user behavior: a different user profile can induce a different I/O load.
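The sizing rule described above can be sketched in a few lines. The following is our own illustrative helper (names and rounding are assumptions, not part of the original document); it combines the per-user load of 0.6 IO/s, the read/write mix, the RAID write penalties and the per-disk IO/s figures from the tables:

```python
import math

def disks_needed(users, raid, rpm, io_per_user=0.6):
    """Estimate data disks for the Exchange database (sketch, not official)."""
    disk_iops = {10000: 120, 15000: 170}[rpm]    # IO/s per disk, see table above
    user_io = users * io_per_user                # e.g. 1000 users -> 600 IO/s
    penalty = {"RAID10": 2, "RAID5": 4}[raid]    # back-end I/Os per user write
    backend_io = user_io * (2/3 + penalty * 1/3) # 4/3 x IO (RAID 10), 2 x IO (RAID 5)
    disks = math.ceil(backend_io / disk_iops)
    if raid == "RAID10" and disks % 2:           # RAID 10 needs an even disk count
        disks += 1
    return disks

print(disks_needed(1000, "RAID10", 10000))  # 8, matching the table
print(disks_needed(1000, "RAID5", 15000))   # 8
```

The values reproduce the table above; for other user profiles the `io_per_user` parameter would have to be adjusted.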
As far as data security is concerned, the log files are considerably more important than the database itself, because the log files record all changes since the last backup. The log files should therefore be protected by a RAID 1 (mirrored disks) or RAID 5 (stripe set with parity); for performance reasons a RAID 1 or, better, a RAID 1+0 is advisable. Since the log files are automatically deleted during backup, no large quantities of data accrue as long as backups are made regularly.
In theory the database itself requires no further protection against data loss: one could work without RAID or, for performance reasons, with RAID 0 (stripe set). In practice, however, we strongly advise against this. If a hard disk failed, the Exchange server would be down until the disk is replaced, the last backup loaded and the restored database synchronized with the log files. Depending on database size this may take hours or even a whole working day, which is unacceptable for an increasingly pivotal medium such as e-mail. For the database, therefore, a RAID 5 or RAID 1+0 should be used. From a performance point of view a RAID 1+0 is advisable; however, cost pressure or the maximum disk configuration frequently forces a side-step to RAID 5. For small Exchange implementations, in which performance is not in the forefront, RAID 5 is a good compromise between performance and costs.
Storage space<br />
Now that we have discussed the various types and the performance of the individual components of the disk subsystem in detail, a significant question remains: How much storage space do I need for how many users? This leads again to the classic problem of user behavior. Are all mails administered centrally by the users on the Exchange server, or locally in private stores (PST) under Outlook? How large are the individual mails typically? Even the client used - Outlook, Outlook Express, or web-based access through a web browser - influences the storage space required by the Exchange server.

If the customer has made no specifications, a moderately active Outlook user who administers his mails on the Exchange server can be taken as a basis. In this respect, 100 MB per user or mailbox is a very practical value; if the calculation adds a further 100% as space for growth, Exchange has adequate working scope. The table below shows the disk requirement for the database. The calculation takes into consideration that a 36 GB disk only has a net capacity of 34 GB, and accordingly 68 GB net for the 73 GB, 136 GB for the 146 GB, and 286 GB for the 300 GB disk. In the RAID 5 calculation a package size of max. 7 disks was taken as a basis; from a performance point of view it would be advisable to choose a package size of 4 or 5, in which case the hard disk requirement increases by 6% or 11% respectively. As already mentioned, RAID 5 should be avoided from a performance viewpoint and preference given to a RAID 1+0 array.
Users with a 100 MB mailbox:

# Users   DB GB net   Number of Disks RAID 1+0          Number of Disks RAID 5
                      36 GB   73 GB   146 GB   300 GB   36 GB   73 GB   146 GB   300 GB
50            10        2       2       2        2        3       3       3        3
100           20        2       2       2        2        3       3       3        3
500          100        6       4       2        2        4       3       3        3
1000         200       12       6       4        2        7       4       3        3
2000         400       24      12       6        4       14       7       4        3
3000         600       36      18      10        6       20      11       6        3
4000         800       48      24      12        6       28      14       7        4
5000        1000       60      30      16        8       35      18      10        5

In addition to the hard disks for the database(s) with the user mailboxes, disk requirements for public folders must also be taken into consideration. Moreover, hard disks are still required for the log files. The scope of the log files depends on the one hand on user activity and on the other hand on the backup cycles; after a backup the log files are deleted. A RAID 1 or RAID 1+0 should be used for the log files. The table below shows the disk requirements for a log file volume of 6 MB per user and day with three days' storage.

# Users   Logfile GB net   Number of Disks RAID 1+0
                           36 GB   73 GB   146 GB
50               1           2       2       2
100              2           2       2       2
500              9           2       2       2
1000            18           2       2       2
2000            36           4       2       2
3000            54           4       2       2
4000            72           6       4       2
5000            90           6       4       2

In addition to the disk requirements for the database and log files, Exchange still
needs storage space for queues. Queues can occur when mails cannot be delivered immediately, e.g. when other mail servers cannot be reached or a database is off-line because of a restore. Queues are typically written and read sequentially, and separate storage space should be provided for them as well. The data volume can be estimated, analogously to the log file requirements, from the average mail volume per user and the anticipated maximum downtime of the components causing the queue.
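The database sizing rule above can also be expressed as a short calculation. The sketch below is our own illustration (function and constant names are assumptions): 100 MB per mailbox plus 100% growth headroom, the net capacities quoted in the text, and RAID 5 packages of at most 7 disks (one parity disk per 6 data disks):

```python
import math

# Net capacity in GB for each nominal disk size, as given in the text.
NET_GB = {36: 34, 73: 68, 146: 136, 300: 286}

def database_disks(users, disk_gb, raid, mb_per_mailbox=100, growth=1.0):
    """Estimate disks for the mailbox database (sketch, not official)."""
    need_gb = users * mb_per_mailbox * (1 + growth) / 1000  # 1000 users -> 200 GB
    data_disks = math.ceil(need_gb / NET_GB[disk_gb])
    if raid == "RAID10":
        return 2 * data_disks                    # every data disk is mirrored
    parity_disks = math.ceil(data_disks / 6)     # one parity disk per package of 7
    return max(3, data_disks + parity_disks)     # RAID 5 needs at least 3 disks

print(database_disks(1000, 36, "RAID10"))  # 12 disks
print(database_disks(5000, 73, "RAID5"))   # 18 disks
```

The results match the database table above in almost all cells; in a few borderline cases the original table rounded against the gross rather than the net disk capacity and is one disk lower.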
Network<br />
Network quality represents an important performance factor. An overloaded Ethernet segment in which many collisions occur, for example, has a significant influence on performance. It is advisable to connect the Exchange server to the backbone - depending on data volume - through 100-Mbit or Gigabit Ethernet.

If the backup is not implemented locally on the server but centrally through the network, the appropriate bandwidth must be provided. In the case of an on-line backup - the recommended backup method, see chapter Backup - the Exchange backup API provides a data throughput of approximately 70 GB/h, which is equivalent to about 200 Mbit/s.

For a user as described in the medium profile (see chapter User profiles) an average data volume of 5 kbit/s per user is to be expected. In addition to the pure data volume, it must be considered that the network is loaded differently depending on the protocol used. Thus the MAPI protocol induces many small network packets, which place a greater load on the network than the fewer, larger packets that occur with the IMAP protocol.
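As a back-of-the-envelope check, the figures above can be combined into a simple load estimate. This sketch is our own (names and the simple additive model are assumptions): 5 kbit/s per medium-profile user, plus roughly 200 Mbit/s while a network backup is running:

```python
def lan_load_mbit(users, network_backup=False):
    """Rough LAN load estimate in Mbit/s (sketch, not an official formula)."""
    user_mbit = users * 5 / 1000          # 5 kbit/s per medium-profile user
    backup_mbit = 200 if network_backup else 0   # online backup stream, ~70 GB/h
    return user_mbit + backup_mbit

print(lan_load_mbit(3000))         # 15.0 Mbit/s of user traffic
print(lan_load_mbit(3000, True))   # 215.0 Mbit/s -> calls for Gigabit Ethernet
```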
High availability<br />
Availability plays a major role in large-scale Exchange server installations. If several thousand users depend on one single server, uncontrolled downtimes can cause sizable damage. In this case it is advisable to implement availability through a cluster solution, as offered by the Windows Cluster Service of Windows Server 2003 Enterprise Edition and Windows Server 2003 Datacenter Edition. Entirely different restrictions as regards disk subsystem and performance apply to such clusters.
Backup<br />
Backup is one of the most important components for safeguarding data. Given highly available hardware resources there can be a tendency to neglect the backup of mirrored data. Studies show, however, that only about 10% of data losses are due to hardware and environmental influences. The remaining 90% is split roughly into one half for data losses caused by software problems, such as program errors, system crashes and viruses, and one half for data losses caused by careless handling of the data by users and administrators.
Some of the possible causes can be eliminated by preventive measures. The hardware-induced causes of data loss can be intercepted by excellent hardware equipment, as provided by Fujitsu; for natural disasters Fujitsu even offers a disaster-tolerant hardware platform for Exchange servers. A good virus scanner should by all means be used with every Exchange server in order to protect against data loss through viruses. Data loss as a result of careless handling can be reduced by training the administrators accordingly. Nevertheless, there remains great potential for data losses due to program errors, system crashes or human error. Here, reliable backup hardware and a careful backup strategy are the only cure.
One measure for limiting data losses through program errors and system crashes is the transaction database concept used by Exchange, which was already explained in the chapter Transaction principle: all changes to the database are recorded in so-called log files. The log files, which are only written sequentially, are largely immune to logical errors caused by program errors, as can occur in a database with complex data structures that is permanently written and read on a random basis. Furthermore, the data volume of the log files is quite small compared with the database, so that errors in the log files are, statistically speaking, substantially rarer.
However, this transaction principle necessitates a regular backup, as otherwise the accumulated changes to the database would in the long run take up a great deal of storage space. With an on-line Exchange backup the log files are automatically deleted by Exchange once the database backup has been completed. If the database is lost, it can - with the aid of a backup and the log files written since that backup - be restored to the state at the time of the loss: after a database has been restored, Exchange automatically replays the log files with all the changes since the backup (backup + logs = current database).
All versions of Exchange Server provide the option of carrying out a so-called on-line backup during ongoing operations: the Exchange database can be backed up while all Exchange services remain available without restriction, apart from some performance loss. It is of course also possible to carry out a so-called off-line backup. However, this is not an adequate method: the Exchange services are not available during the backup, the data volume to be backed up is larger (because the database files are backed up as a whole and not logically), no data check takes place, and the log files are not purged automatically. The essential disadvantage of an off-line backup, however, lies in the fact that during a restore the log files written since the backup was made cannot be replayed.
The choice of a suitable backup medium and suitable backup software has a considerable influence on the availability of the Exchange server. Whereas a backup can be carried out during ongoing Exchange operations, so that its duration is not directly critical, the duration of a restore is decisive for availability, since - in contrast to the backup - the Exchange services are not unrestrictedly available during a restore. This is why, when selecting a backup solution - hardware and software - particular attention must be paid to speed in addition to reliability.
Exchange Server itself contains a number of features that accommodate fast backup and restore. Since Exchange 2000 Server, Exchange has supported several databases and storage groups: storage groups can be backed up in parallel, and databases can be restored individually. In the event of a restore only those users are affected whose data are in the database being restored; all other users can use the Exchange services without restriction, apart from possible performance loss.
In connection with Windows Server 2003, Exchange Server 2003 has a further innovation to offer, the so-called »Volume Shadow Copy Service« (VSS), which substantially shortens the time for a backup. Strictly speaking, the VSS storage technology is an innovation of Windows Server 2003; what is new in Exchange Server 2003 is the support for this function. This means that VSS is only available when Exchange Server 2003 is used in combination with Windows Server 2003.
With VSS Microsoft provides a Windows-proprietary, uniform interface for shadow copies, frequently also referred to as snapshot backups. Snapshot backups are nothing new: a great many storage systems have supported such backups for a long time, and various third-party software solutions exist for making snapshot backups of Exchange databases with the support of such storage systems. What is new is one interface that is supported by Microsoft, standardized, and independent of the disk subsystem. The emphasis is deliberately placed on interface - or framework, as Microsoft puts it.
The manufacturers of intelligent storage systems and backup solutions now have to adapt their products to this framework. Applications, too, have to be adapted if they are to be VSS-compliant and support snapshot backups; Exchange Server 2003 is already such a VSS-compliant application.

In the main, the VSS framework consists of three parts:

• The Requestor is the software that initiates a snapshot, typically backup software. As regards Microsoft's own backup tool, »ntbackup.exe«, which is supplied with Windows Server 2003 or in an extended version with Exchange Server 2003, it must be mentioned that it is not a VSS Requestor with which snapshots of Exchange Server 2003 can be generated; as far as Exchange is concerned, it merely controls classic on-line backups.
• The Writer is a component that every VSS-compliant application has to provide, with the Writer having to be adjusted to the application-specific data architecture. With Exchange, the Writer must ensure that a consistent database state is created in accordance with the transaction database principle on which Exchange is based, and that no changes are made to the data during the actual snapshot. In addition, the Writer must provide meta data for the data to be backed up; in the case of Exchange, for example, a set of consistent data can extend over several volumes that must be backed up together.

• The VSS Provider performs the actual snapshot. The Provider is usually supplied by the storage manufacturers whose storage systems offer internal mechanisms for cloning. Windows Server 2003 includes a software-based Provider that works according to the copy-on-write method.
[Figure: the VSS framework connecting VSS Requestors with the VSS Writers of applications such as Exchange and SQL Server and with several VSS Providers. Several VSS Requestors and several VSS Providers can co-exist for various volumes.]
The advantage of the VSS framework is that components of various software and hardware manufacturers harmonize with each other. Particularly in larger data centers, in which diverse hardware and applications are used, the backup can now be coordinated with standardized software regardless of the storage manufacturer, and special solutions are no longer needed to meet the requirements of, for example, database-based applications.
Backup hardware<br />
When choosing the backup medium, attention must be paid to backing up the database in an adequate time. Since an on-line backup means performance losses for the users, it should be possible to carry out a backup during the typically low-use hours of the night. In addition to the backup, database maintenance, such as garbage collection and on-line defragmentation, also takes place during this time. Garbage collection should run first, followed by on-line defragmentation and then the backup: garbage collection reduces the data volume to be backed up, and the defragmentation speeds up the subsequent database accesses during the backup. Both the data transfer rate and the capacity of an individual backup medium play a role in the selection of the backup medium. It is indeed possible to carry out a backup on hard disk, magneto-optical disk (MO), CD-RW or DVD-RW; due to the data volume, however, tapes are the typical medium used.
If there are no other requirements, such as existing backup structures, the backup medium should be chosen in such a way that backup and restore complete both in an acceptable time and with a manageable number of tapes. In any case the backup device should be selected so that the backup runs without operator intervention, i.e. without an administrator having to change tapes during the backup or restore. For larger data volumes, where one medium is not enough, there are so-called tape libraries, which change tapes automatically, as well as devices with several drives that write onto several tapes in parallel - similar to RAID 0 (striping) with hard disks - so as to increase data throughput. The tables below show the available tape technologies and a small selection of tape libraries.
Technology   Maximum data throughput   Capacity/tape without compr.
             [MB/s]       [GB/h]       [GB]
DDS Gen5        3           10           36
VXA-2           6           21           80
VXA-320        12           42          160
LTO2         24 (35)     105 (123)      200
LTO3           80          281          400

Library              Technology   Number               Maximum throughput   Capacity without compr.
                                  Drives    Tapes      [GB/h]               [TB]
VXA-2 PackerLoader   VXA-2        1         10         21                   0.8
FibreCAT TX10        VXA-320      1         10         42                   1.6
FibreCAT TX24        LTO2, LTO3   1 - 2     12 / 24    123 - 281            2.4 - 9.6
MBK 9084             LTO2, LTO3   1 - 2     10 - 21    123 - 281            2.0 - 8.4
Scalar i500          LTO2, LTO3   1 - 18    36 - 402   123 - 5058           7.2 - 160
Scalar i2000         LTO2, LTO3   1 - 96    100 - 3492 123 - 26976          20 - 1396
Scalar 10k           LTO2, LTO3   1 - 324   700 - 9582 123 - 91044          789 - 3832
Backup duration<br />
Calculation of the backup duration is not quite trivial. In theory it is simply the data volume divided by the data transfer rate; however, the maximum data transfer rate specified for the tape technology cannot be taken as a basis, since the effective data transfer rate is determined by further factors.
To begin with, the data must be made available. Here the performance of the disk subsystem in which the database resides plays a role, as do CPU performance, main memory size and finally Exchange Server 2003 itself. For an on-line backup Exchange must provide all the data by means of the so-called backup API; in doing so, 64 kB blocks must be read, verified and transferred to the backup software. Microsoft quotes a throughput of approximately 70 GB/h for the Exchange Server 2003 backup API.
A further limitation of the data throughput follows from a technical feature of tape drives. Tapes are fast streaming devices, but they become dramatically slow when the data are not delivered continuously and in sufficient quantities. If the data arrive too slowly or irregularly, the tape can no longer operate in so-called streaming mode but switches to start-stop operation: the tape is stopped whenever no data are pending and restarted when sufficient data are again available; for many recording methods the tape even has to be rewound a little. This takes time, and the writing speed decreases. How pronounced this effect is depends on the recording technology used, on the cache capabilities of the tape drive, and on the backup software used. The better the backup software is familiar with and designed for the features of the backup drive, the higher the effective data transfer rate.
Data compression is a further influence on the effective data transfer rate. All backup drives support data compression. It is implemented not by the backup software or the driver for the tape drive, but by the firmware of the tape drive itself. Depending on how well the data compress, the write speed can increase; as a result, the effective data throughput can even exceed the maximum data throughput of the backup medium.
The table below shows effective data throughput rates. In each case an on-line backup of a 50 GB Exchange database was carried out on a sufficiently fast disk subsystem with the Windows backup program.

Technology   Maximum data throughput   Effective data throughput   Total duration
             [MB/s]    [GB/h]          [MB/s]    [GB/h]            [h]
DDS Gen5       3         10              4.8      16.8             3:10
VXA-2          6         21              7.5      26.3             1:45
VXA-320       12         42             15.0      52.7             1:00
LTO2          30        105             47.0     165.2             0:18
LTO3          80        281            105.0     369.1             0:08
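The total durations in the table follow directly from the effective throughput rates; a minimal cross-check for the 50 GB database used in the measurements:

```python
# Cross-check of the table: total duration = database size / effective
# throughput. The 50 GB size and the GB/h rates are taken from the table.

def duration_hm(size_gb: float, rate_gbph: float) -> str:
    """Backup duration formatted as h:mm for a given effective throughput."""
    minutes = round(size_gb / rate_gbph * 60)
    return f"{minutes // 60}:{minutes % 60:02d}"

for tech, rate_gbph in [("LTO2", 165.2), ("LTO3", 369.1)]:
    print(tech, duration_hm(50, rate_gbph))
```

For the slower technologies the table values deviate slightly from this simple division, since start-stop effects grow as the drive waits for data.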
Backup solutions for <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong><br />
As backup software, either the Windows backup program or a third-party product that supports the Exchange backup API, such as BrightStor ARCserve from Computer Associates or NetWorker from EMC Legato, can be used.
Compared with the Windows backup program, third-party products provide additional functions, such as support for tape libraries or the backup of individual mailboxes or even individual mails or folders. However, when backing up individual mailboxes or folders, note that the throughput is considerably lower - only approximately 20% - compared with an on-line backup of entire Exchange databases.

With Windows Server 2003 and Exchange Server 2003, VSS-based backup and recovery should become a standard procedure for disaster recovery in the enterprise environment. The advantages of this methodology, as discussed earlier, speak for themselves. A third-party backup product is also necessary here, because Windows' own backup tool, ntbackup, does not support VSS snapshots of Exchange databases.

Feature                 Windows Backup   BrightStor ARCserve   NetWorker
Offline Backup                ×                   ×                 ×
Online Backup                 ×                   ×                 ×
Single Database               ×                   ×                 ×
Single Mailbox                                    ×                 ×
Single Objects                                    ×                 ×
VSS Snapshots                                     ×                 ×
Backup in parallel            ×                   ×                 ×
Online Restore                ×                   ×                 ×
Restore in parallel           ×                   ×                 ×
Cluster Support               ×                   ×                 ×
Tape Library Support                              ×                 ×
Remote Backup                 ×                   ×                 ×
Environments               Small              Windows         Heterogeneous
The functional scope of the products available on the market varies substantially, particularly as far as the supported hardware devices, the support of applications other than Exchange, or even operating systems other than Windows are concerned. For Exchange Server 2003, Fujitsu recommends the backup products of the NetWorker family. This backup solution is VSS-compliant and also enables individual mailboxes to be backed up and restored. In addition, NetWorker supports the online backup of an exceptionally wide range of applications, all market-relevant backup devices and operating system platforms. Consequently, such a backup solution is not an isolated solution for Exchange Server 2003, but lays the foundation for a company-wide enterprise backup solution.
Backup strategy<br />
Even more fundamental than adequate backup hardware and software is an appropriate backup strategy. The backup strategy influences the requirements made of the backup hardware and software and in turn defines the restore strategy. Thus the backup intervals and the backup method are critical for the restore times. The structuring of the Exchange server into storage groups and databases also influences the backup and restore times. Since the restore time in particular is the critical path (as it means downtime), the restore time required - particularly with larger Exchange servers - determines both the backup concept and the Exchange storage group and database concept.
Exchange 2000 Server and Exchange Server 2003 support up to four storage groups, each with up to five databases. Each storage group is administered within its own process, and for each storage group a separate set of log files is maintained for all its databases. One backup process is possible per storage group; in this way, provided there is appropriate backup hardware, backups can run in parallel. On the other hand, it should not be concealed that splitting into several storage groups means additional overhead during normal operation, because the additional processes cause a higher CPU load and greater memory requirements.
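The gain from one backup process per storage group can be sketched as follows; the group sizes and the 80 GB/h per-stream rate are assumed illustrative values, not measurements from this paper:

```python
# With one backup stream per storage group and one drive per stream,
# total backup time is governed by the largest group rather than the sum.

def sequential_backup_h(group_sizes_gb, rate_gbph):
    """Backup time in hours when groups are backed up one after the other."""
    return sum(group_sizes_gb) / rate_gbph

def parallel_backup_h(group_sizes_gb, rate_gbph):
    """Backup time in hours with one dedicated drive per storage group."""
    return max(group_sizes_gb) / rate_gbph

groups_gb = [60, 40, 40, 20]  # assumed sizes of four storage groups
print(sequential_backup_h(groups_gb, 80))  # -> 2.0 hours
print(parallel_backup_h(groups_gb, 80))    # -> 0.75 hours
```

The sketch ignores the extra CPU and memory overhead of several storage-group processes mentioned above, which in practice reduces the gain somewhat.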
In order to completely back up the Exchange 2000 Server or Exchange Server 2003 databases it is - in contrast to Exchange 5.5 - not sufficient to back up only the Exchange databases. Although Active Directory is not a component of Exchange 200x and its backup, Exchange 200x relies heavily upon it: the entire Exchange 200x configuration data are stored in Active Directory, as are the user data. Moreover, Exchange 200x is based on IIS, and various fundamental Exchange configuration data are stored in the IIS metabase. Both sets of information, Active Directory and the IIS metabase, are backed up in a system-state backup. Note in this respect that Active Directory is not necessarily hosted on the Exchange server; in that case a system-state backup of the domain controller must be performed. With clustered systems, further components must also be taken into consideration in the backup.
The backup hardware can either be connected directly to the Exchange server or to a dedicated backup server in the network. With an on-line backup, access to the Exchange data is in both cases effected through the backup interface of the respective Exchange server. If a decision is made in favor of a dedicated backup server, then a 1 Gbit LAN is advisable in order to guarantee adequate data throughput.

[Diagram: Exchange server with directly attached backup hardware and backup software, versus a dedicated backup server accessing the Exchange server via a backup agent over the LAN]

Online backup types   save database   save log files   purge log files
Full                        ×               ×                 ×
Incremental                                 ×                 ×
Differential                                ×
Copy                        ×               ×
Restore<br />
Databases can be restored individually. During this process the other databases are not affected, and the users assigned to them can continue to use all Exchange services.
Depending on the cause that triggers a database restore, loss of data is possible despite diligent backups. A distinction is made between two recovery scenarios:
Roll-Forward Recovery
In a roll-forward recovery, one or more databases of a storage group are lost but the log files are intact. In this case, a selective restore of the databases concerned can be effected from a backup. Exchange then restores all the data changed since the time of the backup on the basis of the transaction logs. This means that no data at all are lost, despite the necessity of accessing a backup.
Point-in-Time Recovery
If, in addition to one or more databases, the log files of a storage group are also affected, all the databases and the log files of the storage group must be copied back from the backup. As the changes since the last backup - in the form of transaction logs - are also lost in this case, only the state of the database at the time of the backup can be restored.
In such a disaster, which necessitates a point-in-time recovery and in which data are lost, only backup intervals that are as short as possible help minimize the data losses.
The time required to restore a database is always greater than the time needed for the backup. On the one hand, this is hardware-induced: a tape delivers data in streaming mode faster than disks can write them, particularly when writing to a RAID 5 array, where parity must also be calculated and written. On the other hand, it is software-induced, because the restore process is more complex than the backup process. The restore process is made up of
• Installation of the last full backup.
• Installation of incremental or differential backups.
• Replay of the changes saved in the log files since the last full backup.
For the restore of the backup it can be assumed that typically 60% - 70% of the backup speed is achieved. The time for the replay of the log files depends on the backup intervals and the performance of the Exchange server: the longer the time since the last backup, the more log information has to be replayed. The loading of the log files can actually take longer than restoring the backups themselves (see box).

Restore example
Database size: 4 GB
Log files of one week: 360 × 5 MB = 1.8 GB
Restore time for the database: ½ hour
Replay of the log files: 5 hours
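The arithmetic behind the restore example can be sketched as follows; the backup speed, the 65% restore factor and the log replay rate are assumed values chosen to reproduce the figures in the box:

```python
# Total restore time = copying back the backup (at only 60-70 % of the
# backup speed) plus replaying the log files accumulated since then.
# All rates below are assumptions for illustration, not measured values.

def restore_time_h(db_gb, backup_gbph, log_gb, replay_gbph, factor=0.65):
    """Estimated restore duration in hours."""
    copy_back = db_gb / (backup_gbph * factor)  # restore runs slower than backup
    log_replay = log_gb / replay_gbph           # replay is often the long pole
    return copy_back + log_replay

# 4 GB database, one week of logs (1.8 GB), as in the restore example:
print(round(restore_time_h(4, 12.3, 1.8, 0.36), 1))  # -> 5.5 hours
```

As in the example, the half hour for the database itself is dwarfed by the five hours of log replay, which is why short backup intervals shorten restores.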
In order to increase the restore speed it is advisable to deactivate the virus scanner for the corresponding<br />
drive during the loading of the data. A virus check ought to be superfluous as the data were already checked<br />
prior to their entry in the database (see chapter Virus protection).<br />
Restore time is tantamount to downtime. Particularly with such an elementary medium as e-mail, certain availability requirements apply. For example, if the requirement is that an Exchange server may fail for a maximum of one hour, then it must be possible to complete the necessary restore of the database within precisely this time. The maximum size of an Exchange database is thus determined indirectly. In the end, therefore, the practical upper limit of an Exchange server is determined not by the efficiency of the hardware, such as CPU, main memory and disk subsystem, but by the backup concept.
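This relationship can be turned around into a rough sizing rule; the 100 GB/h effective restore rate and the margin reserved for log replay are assumptions for illustration:

```python
# Maximum admissible database size derived from the restore window:
# whatever time remains after a log-replay margin can be spent copying
# data back at the effective restore rate (both values assumed here).

def max_db_size_gb(window_h, restore_gbph, log_replay_margin_h=0.25):
    """Largest database restorable within the permitted downtime window."""
    usable_h = max(window_h - log_replay_margin_h, 0)  # time left for copying
    return usable_h * restore_gbph

# One hour of permitted downtime at an assumed 100 GB/h restore rate:
print(max_db_size_gb(1.0, 100))  # -> 75.0 GB
```
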
Best Practice<br />
The best backup method is a regular on-line full backup. In addition to the <strong>Exchange</strong> databases the full<br />
backup should include the system state of the <strong>Exchange</strong> server and of the domain controller as well as all<br />
the <strong>Exchange</strong> program files.<br />
In contrast to incremental and differential backups, a full backup minimizes the restore times. In order to minimize the time required for loading the log information, it is advisable to perform a full backup as often as possible. With differential and incremental backups, it must also be taken into consideration that additional disk space is temporarily required for the log files during the restore.

Backup strategy                               Tapes needed for a restore
Daily Full Backup                             1 tape
Weekly Full Backup with Daily Differentials   2 tapes
Weekly Full Backup with Daily Incrementals    n tapes

The backup hardware should be designed in such a way that a full backup can be carried out without manually changing the tape. This provides the basis for an automatic, unattended, daily backup.
For further information on backup strategies see the Exchange Server 2003 Technical Documentation Library [L10] and the Exchange 2000 Server Resource Kit [L12].
Archiving<br />
Although Exchange could theoretically manage data storage of up to 320 TB, the practical limit is in the order of 400 GB. Therefore, a limitation of the mailbox size, in order to control the data volume on an Exchange server, is found in almost all larger Exchange installations. And where can older mails be put? In most cases the answer to this question is left up to the mailbox user: the user decides whether information that exceeds his mailbox limit is deleted or archived on the client side. In this case, however, neither data integrity nor data security is generally ensured. This cannot be the solution in fields of business in which the retention of all correspondence is required by law. Server-side solutions are required here.
There is a range of high-performance third-party products for the automatic archiving of e-mails. The term archiving should not be confused with backup in this regard: a backup is used to save the data of current databases and for their recovery, whereas an archive is used to retain the entire information history. Moreover, with archiving a distinction must be made between classic »long-term archiving« and »data relocation« to lower-cost storage media.
Long-term archiving
In order to meet statutory or audit retention obligations, certain data stocks must be retained for the stipulated period. Once successfully archived, these data must no longer be changed, but must be available at any time for evaluation on request.
Data relocation
Data relocation is particularly suited to moving so-called inactive data - the usual term for e-mails that are forgotten after a fairly long time. These e-mails are migrated to lower-cost media according to set rules, such as age (date received, date of last access) and size, and threshold values, such as high and low watermarks. In contrast to long-term archived e-mails, these e-mails remain visible in the Exchange database by means of so-called stub objects. User access to a relocated e-mail triggers its restoration into the Exchange database automatically and in a way that is transparent for the user.
An archiving solution for <strong>Exchange</strong> <strong>Server</strong> can on the one hand be used to meet statutory regulations and on<br />
the other hand also increase performance and availability. Relieving the load of old e-mails on the <strong>Exchange</strong><br />
database results in a better throughput and in the event of a restore in shorter downtimes.<br />
Virus protection<br />
Approximately as many data losses are due to computer viruses as to hardware failures - about 8% each. And these are only the cases that entail data losses; they do not include failures due to the blocking of mail servers by viruses, or downtimes for the elimination of viruses. Therefore, it is vital to protect a mail system with a virus scanner so as to block viruses before they spread or do any damage. It is not sufficient to check only incoming mails for viruses. Outgoing mails must also be checked in order not to unintentionally distribute to business partners viruses that were introduced in ways other than incoming mail; the damage in this case would above all be a loss of image. Internal mail traffic must be checked as well, because viruses can be introduced in many ways, e.g. via data media (such as floppies, CDs, removable disks or USB sticks), Internet and remote access, or portable computers that are also operated in external networks.
In addition to viruses that spread through e-mail, spam - unsolicited advertising - is also a nuisance nowadays and places a substantial load on mail servers. Statistics show that between 5 and 40% of the mail volume is caused by spam.
<strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> itself does not provide a virus scanner. Third-party solutions are required in this<br />
respect. As of version <strong>Exchange</strong> 5.5 Service Pack 3, however, <strong>Exchange</strong> has at least a virus scanner API,<br />
known in short as AV API. This interface permits virus scanners to check unencrypted<br />
e-mails for viruses in an efficient way and to eliminate them before they reach the recipient's mailbox. For<br />
encrypted e-mails, client-side anti-virus tools are needed.<br />
There is a whole range of anti-virus products. These products are generally not only restricted to the<br />
protection of <strong>Exchange</strong>, but mostly include a whole range of protective programs, with which client PCs, web<br />
servers, file servers and other services can be safeguarded against viruses. Although the <strong>Exchange</strong> virus<br />
API was already introduced with <strong>Exchange</strong> 5.5 SP3, not all anti-virus solutions support this interface. Even<br />
today some products are still restricted to the SMTP gateway and the client interface. When selecting an<br />
anti-virus solution, care should be taken that it is compatible with the virus scanner API of <strong>Exchange</strong> <strong>Server</strong><br />
<strong>2003</strong>. Effective protection and optimal performance are only guaranteed in this way.<br />
An overview of existing anti-virus solutions for <strong>Exchange</strong> <strong>Server</strong> and an independent performance<br />
assessment are provided by the web site www.av-test.org, a project of the University of Magdeburg and AV-<br />
Test GmbH.<br />
For its work a virus scanner uses considerable system resources. Above all, processor performance is required, particularly for compressed attachments, because in these cases the contents to be checked must first be unpacked. In contrast, a virus scanner does not place an appreciable load on main memory, disk I/O or network I/O.
Measurements with the medium profile on a PRIMERGY TX150 with TrendMicro ScanMail 6.2 have shown that with a virus scanner the CPU requirement of Exchange increases by a factor of approximately 1.6. The increase in response times, on the other hand, is almost constant at approximately 25 ms. For the processors of the PRIMERGY it is not a problem to provide the necessary computing performance; during sizing, attention merely has to be paid to planning for this CPU requirement and selecting a correspondingly high-capacity CPU or number of processors.
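Applying the measured factor in a sizing estimate might look like the following sketch; the 45% baseline utilization is an assumed example value, only the factor 1.6 comes from the measurement above:

```python
# Projecting CPU utilization once an AV-API virus scanner is active:
# the measured factor of ~1.6 is applied to the Exchange-only baseline.

def cpu_with_scanner(base_cpu_percent: float, factor: float = 1.6) -> float:
    """CPU utilization expected with virus scanning enabled."""
    return base_cpu_percent * factor

# A server at an assumed 45 % CPU without scanning reaches about 72 %:
print(round(cpu_with_scanner(45)))  # -> 72
```

The result illustrates why the CPU headroom for the scanner must be planned from the outset: a baseline much above 60% would leave no room for the factor 1.6.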
Nowadays data security plays an increasingly important role, and many people decide in favor of encrypting their mails. However, it must be borne in mind that such mails cannot be checked for viruses on the Exchange server. In this case classic virus scanners must be used on the client systems.
System analysis tools<br />
A business-critical application, such as e-mail communication, requires foresighted planning and continuous<br />
performance checks during ongoing operations. For <strong>Exchange</strong> <strong>Server</strong>, Microsoft provides a variety of tools<br />
that enable the efficiency of an Exchange Server to be planned, verified and analyzed. These tools can be assigned to three phases:
Planning and Design<br />
White Paper<br />
A variety of documents exists for the planning of an <strong>Exchange</strong> <strong>Server</strong> environment and the sizing of<br />
the individual <strong>Exchange</strong> <strong>Server</strong>s. In addition to this white paper, which deals specifically with the sizing<br />
of PRIMERGY servers, there are numerous Microsoft documents, see <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong><br />
Technical Documentation Library [L10]. The white paper <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> Performance and<br />
Scalability <strong>Guide</strong> [L11], which contains essential information about the performance and scaling of<br />
<strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong>, is particularly worth mentioning here.<br />
System Center Capacity Planner 2006<br />
Microsoft also provides the efficient product System Center Capacity Planner 2006 [L14], which<br />
enables the interactive planning and modeling of <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> topologies and Operations<br />
Manager 2005 environments.<br />
Prototyping (design verification)<br />
There are two tools from Microsoft for evaluating the performance of an Exchange Server and verifying whether the planned server hardware has been adequately sized. Neither tool is suited to ongoing operations; they should only be used on non-productive systems.
JetStress<br />
The tool JetStress is used to check the disk I/O performance of an <strong>Exchange</strong> <strong>Server</strong>. JetStress<br />
simulates the disk I/O load of <strong>Exchange</strong> for a definable number of users with regard to the <strong>Exchange</strong><br />
databases and their log files. It is not absolutely necessary for <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> to be installed<br />
for this purpose. However, CPU, memory and network I/O are not simulated by JetStress.<br />
LoadSim<br />
The simulation tool LoadSim has already been presented in detail in chapter <strong>Exchange</strong> Measurement<br />
Methodology. It simulates the activity of <strong>Exchange</strong> users and thus puts the <strong>Exchange</strong> <strong>Server</strong> under<br />
realistic load, with all the resources (CPU, memory, disk subsystem, network) the <strong>Exchange</strong> <strong>Server</strong><br />
needs being involved.<br />
Both stress tools can be downloaded free of charge from the web site Downloads for <strong>Exchange</strong> <strong>Server</strong><br />
<strong>2003</strong> [L9].<br />
Operate<br />
Windows contains a central concept to monitor and analyze the performance of a system at run time.<br />
Events and performance counters are collected and archived on a system-wide basis for this purpose.<br />
This standardized concept is also open to all applications, provided they make use of it. Microsoft Exchange Server uses it intensively and records not only events, available in the event log, but also a variety of Exchange-specific performance counters. To evaluate the events and performance counters you can either use Event Viewer and Performance Monitor, available as standard in every Windows system, or use special tools that evaluate and assess the contents of the event log and the performance counters under certain application aspects.
Event Viewer<br />
The Event Viewer is a standard tool in every Windows system and can be found under the name<br />
»Event Viewer« in the start menu under »Administrative Tools«. The events are sorted into various<br />
groups, such as »Application«, »Security«, »System« or »DNS <strong>Server</strong>« and divided in each case into<br />
the classes »Information«, »Warning« and »Error«. The events logged by <strong>Exchange</strong> <strong>Server</strong> are to be<br />
found under the category »Application«. However, events that influence the availability of an<br />
<strong>Exchange</strong> <strong>Server</strong> can also appear under »System«.<br />
Performance Monitor<br />
The Performance Monitor is an integral part of every Windows system and can be found under the<br />
name »Performance« in the start menu under »Administrative Tools«. It can also be selected using<br />
the short command »perfmon«.<br />
A description of the most important performance counters relevant to the performance of an <strong>Exchange</strong><br />
<strong>Server</strong> can be found in the following chapter Performance Analysis.<br />
Microsoft <strong>Exchange</strong> <strong>Server</strong> Best Practices Analyzer Tool<br />
The tool Microsoft <strong>Exchange</strong> <strong>Server</strong> Best Practices Analyzer [L9], also known in short as ExBPA,<br />
determines the »state of health« of an entire <strong>Exchange</strong> <strong>Server</strong> topology. For this purpose, the<br />
<strong>Exchange</strong> <strong>Server</strong> Best Practices Analyzer automatically collects the settings and data from the<br />
relevant components, such as Active Directory, the registry, the metabase and performance counters. These data are checked against a comprehensive set of best-practice rules, and a detailed report with recommendations for the optimization of the Exchange environment is then prepared.
Microsoft <strong>Exchange</strong> <strong>Server</strong> Performance Troubleshooting Analyzer Tool<br />
The Microsoft <strong>Exchange</strong> <strong>Server</strong> Performance Troubleshooting Analyzer Tool [L9] collects the<br />
configuration data and performance counters of an <strong>Exchange</strong> <strong>Server</strong> at run time. The tool analyzes<br />
the data and provides information about the possible causes of bottlenecks.<br />
Microsoft <strong>Exchange</strong> <strong>Server</strong> Profile Analyzer<br />
The <strong>Exchange</strong> <strong>Server</strong> Profile Analyzer [L9] can be of help for future capacity planning and<br />
performance analyses. With the aid of this tool it is possible to collect statistical information about the<br />
activities of individual mailboxes as well as entire <strong>Exchange</strong> <strong>Server</strong>s.<br />
Microsoft <strong>Exchange</strong> <strong>Server</strong> User Monitor<br />
Unlike the above listed tools, the Microsoft <strong>Exchange</strong> <strong>Server</strong> User Monitor [L9] does not work on the<br />
server side, but on the client side. As a result it is possible to analyze the impression of an individual<br />
user with regard to the performance of <strong>Exchange</strong> <strong>Server</strong>. The <strong>Exchange</strong> <strong>Server</strong> User Monitor collects<br />
data such as processor usage, response times of the network and response times of the Outlook <strong>2003</strong><br />
MAPI interface. These data can then be used for bottleneck analyses and for the planning of future<br />
infrastructures.<br />
Microsoft Operations Manager<br />
In Microsoft Operations Manager (MOM) [L15] Microsoft makes a powerful software product available,<br />
with which events and the system performance of various server groups can be monitored within the<br />
company network. MOM creates reports, trend analyses and offers proactive notifications in the event<br />
of alerts and errors on the basis of freely configurable filters and rules. These rules can be extended<br />
by additional management packs, which are available for various applications. An Exchange-specific management pack [L9] is also available for Microsoft Exchange Server.
Performance analysis<br />
Windows and applications such as <strong>Exchange</strong> <strong>Server</strong> provide performance counters for all relevant<br />
components. These performance counters can be viewed, monitored and recorded through a standardized<br />
interface with the performance monitor that is available in all Windows versions – also known as system<br />
monitor in some Windows versions.<br />
The performance monitor can be found under the name<br />
»Performance« in the start menu under »Administrative Tools«. It<br />
can also be started using the short command »perfmon«.<br />
Performance counters are grouped in an object-specific manner,<br />
some of them also exist in various instances when an object is<br />
available several times. For example, there is a performance<br />
counter, »%Processor Time«, for the object »Processor«, with<br />
one instance per CPU on a multi-processor system. Not all<br />
performance counters come with Windows itself; many<br />
applications, such as <strong>Exchange</strong> <strong>Server</strong>, install their<br />
own performance counters, which integrate into the operating<br />
system and can be queried through the performance monitor. The<br />
performance data can either be monitored on screen or, better, written to a file and analyzed offline.<br />
Performance counters can be evaluated not only on the local system but also on remote servers, which<br />
requires appropriate access rights. How to use the performance monitor is described in detail in the<br />
Windows help and in Microsoft articles on the Internet, and there is also an explanation of each individual<br />
performance counter under »Explain«.<br />
Please note that the performance monitor is itself a Windows application that needs computing time. Under<br />
extreme server overload the performance monitor may be unable to determine and display any<br />
performance data; in this case the relevant values are 0 or blank.<br />
To obtain an overview of the efficiency of an <strong>Exchange</strong> Mailbox <strong>Server</strong> it is sufficient to observe a number of<br />
performance counters from the categories:<br />
Processor<br />
Memory<br />
Logical Disk<br />
MS<strong>Exchange</strong>IS<br />
SMTP <strong>Server</strong><br />
In detail, these are the following performance counters, where »server« stands for the server to be<br />
monitored and »drive:« for the respective logical drive:<br />
\\server\Processor(_Total)\% Processor Time<br />
\\server\System\Processor Queue Length<br />
\\server\Memory\Available MBytes<br />
\\server\Memory\Free System Page Table Entries<br />
\\server\Memory\Pages/sec<br />
\\server\LogicalDisk(drive:)\Avg. Disk Queue Length<br />
\\server\LogicalDisk(drive:)\Avg. Disk sec/Read<br />
\\server\LogicalDisk(drive:)\Avg. Disk sec/Write<br />
\\server\LogicalDisk(drive:)\Disk Reads/sec<br />
\\server\LogicalDisk(drive:)\Disk Writes/sec<br />
\\server\MS<strong>Exchange</strong>IS Mailbox(_Total)\Send Queue Size<br />
\\server\MS<strong>Exchange</strong>IS\RPC Averaged Latency<br />
\\server\MS<strong>Exchange</strong>IS\RPC Requests<br />
\\server\MS<strong>Exchange</strong>IS\VM Total Large Free Block Bytes<br />
\\server\SMTP <strong>Server</strong>(_Total)\Local Queue Length<br />
\\server\SMTP <strong>Server</strong>(_Total)\Remote Queue Length<br />
Depending on the configuration, the <strong>Exchange</strong> <strong>Server</strong> to be monitored has<br />
to be selected (it is also possible to monitor several <strong>Exchange</strong> <strong>Server</strong>s at the same time). With the<br />
logical disk counters you also need to select the logical drive(s) relevant to the <strong>Exchange</strong><br />
databases and log files.<br />
On their own, the figures and graphs provided by the performance monitor of course say nothing about the<br />
efficiency and health of an <strong>Exchange</strong> <strong>Server</strong>. For this purpose a number of rules and thresholds are<br />
necessary, which the administrator should be aware of for each of these performance counters.<br />
Processor<br />
Processor(_Total)\% Processor Time<br />
To also be able to manage load peaks, the average CPU usage should not be greater than 80% for<br />
a longer period of time.<br />
System\Processor Queue Length<br />
Adequate processor performance exists when the counter Processor Queue Length has an<br />
average value that is smaller than the number of logical processors.<br />
If a bottleneck is evident here, two solutions exist: increase the processor performance through<br />
additional or faster processors, or relocate services or mailboxes to other <strong>Exchange</strong> <strong>Server</strong>s.<br />
Memory<br />
Memory\Available MBytes<br />
The free memory still available should always be greater than 50 MB and must never fall below<br />
4 MB, as Windows otherwise implements drastic reductions in the resident working sets of the<br />
processes.<br />
If a bottleneck is evident here, the upgrading of the main memory should be taken into<br />
consideration (see also chapter Main memory).<br />
Memory\Free System Page Table Entries<br />
To ensure that the operating system remains runnable, the Free System Page Table Entries should<br />
not go below 3,500.<br />
If the boot.ini switch /3GB is set, a check should be made as to whether the switch /USERVA=3030<br />
has also been set (see also chapter Main memory).<br />
Logical Disk<br />
LogicalDisk(drive:)\Avg. Disk Queue Length<br />
The average queue length of a logical drive should not exceed the number of hard disks from<br />
which the logical drive was formed. Longer disk queues occur together with higher disk response<br />
times and are an indication of an overloaded disk subsystem.<br />
LogicalDisk(drive:)\Avg. Disk sec/Read<br />
LogicalDisk(drive:)\Avg. Disk sec/Write<br />
The read and write response times should be clearly below 20 ms, ideally around 5 ms for read<br />
and around 10 ms for write. Higher response times occur together with longer disk queues and are<br />
an indication of an overloaded disk subsystem.<br />
A remedy can be found by adding further hard disks, using faster hard disks or using a more<br />
efficient disk subsystem. Activating the hard-disk read and write caches, or activating or enlarging<br />
the cache of the disk subsystem or of the RAID controller, can contribute toward reducing the<br />
response times and thus also the disk queue length. Another way to relieve the load on the disk<br />
subsystem is to enlarge the <strong>Exchange</strong> cache by upgrading the server with additional main<br />
memory, which reduces the need to access the databases.<br />
LogicalDisk(drive:)\Disk Reads/sec<br />
LogicalDisk(drive:)\Disk Writes/sec<br />
The number of I/Os that a logical drive can manage per second depends on the RAID level used<br />
and on the number of hard disks. The two performance counters show the number of read and<br />
write requests that are generated on the server side. Depending on the RAID level, the outcome for<br />
the hard disks is a different number of I/O requests, which is calculated for a RAID 10 according to<br />
the formula<br />
IO10 = IOread + 2 × IOwrite<br />
and for a RAID 5 according to the formula<br />
IO5 = IOread + 4 × IOwrite.<br />
It must also be taken into consideration that a hard disk – depending on its<br />
rotational speed – can only satisfy a certain number of IO/s (see the table below). Hence, for<br />
example, a RAID 10 consisting of four hard disks with 10 krpm achieves a<br />
maximum rate of 4 × 120 = 480 IO/s.<br />
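The two RAID formulas, together with the per-disk I/O rates, allow a quick plausibility check of a planned array. The following sketch in Python is illustrative only; the function and variable names are not part of this guide:

```python
# Sketch: translate server-side read/write rates into disk-side I/O for
# RAID 10 and RAID 5 using the two formulas above, and compare the result
# with what the array can sustain. Names are illustrative.

# typical maximum IO/s per hard disk, by rotational speed
DISK_IOS = {5400: 62, 7200: 75, 10000: 120, 15000: 170}

def raid10_ios(reads_per_sec, writes_per_sec):
    """RAID 10: every logical write costs two disk I/Os (mirroring)."""
    return reads_per_sec + 2 * writes_per_sec

def raid5_ios(reads_per_sec, writes_per_sec):
    """RAID 5: every logical write costs four disk I/Os (parity update)."""
    return reads_per_sec + 4 * writes_per_sec

def array_capacity(disks, rpm):
    """Maximum sustainable IO/s of an array of identical hard disks."""
    return disks * DISK_IOS[rpm]

# Example: four 10 krpm disks, 200 reads/s and 100 writes/s server-side
cap = array_capacity(4, 10000)      # 480 IO/s, as in the text
print(raid10_ios(200, 100), cap)    # 400 IO/s: fits into the RAID 10
print(raid5_ios(200, 100), cap)     # 600 IO/s: would overload a RAID 5
```

The example illustrates why, at the same disk count, a write-heavy load may fit a RAID 10 but overload a RAID 5.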
If the I/O rate observed here is too high, there are various options of finding a remedy. It is possible<br />
on the one hand to increase the number of hard disks used. If a RAID 5 is used, you can consider<br />
using a RAID 10 instead, which has a better I/O rate with the same number of hard disks (see also<br />
chapter Hard disks). If the I/O bottleneck affects a logical drive on which an <strong>Exchange</strong> database<br />
lies, upgrading the main memory is another possible solution. <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> can use up to<br />
1.2 GB RAM as the database cache. Increasing the database cache of course results in a<br />
lower disk I/O rate. The default size of the <strong>Exchange</strong> cache is about 500 MB and with a set switch<br />
/3GB about 900 MB. If sufficient memory is available, a further 300 MB can be added. For this<br />
purpose, the value msExchESEParamCacheSizeMax must be set to 307200 using the Active Directory<br />
Service Interfaces (ADSI) and the ADSI Edit tool.<br />
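The value 307200 is specified in database cache pages; assuming the 4 kB ESE page size of <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong>, it corresponds exactly to the 1.2 GB maximum cache mentioned above:

```python
# Sketch: verify that 307200 cache pages equal the 1.2 GB maximum
# database cache. The 4 kB page size is an assumption based on the
# ESE page size of Exchange Server 2003.
PAGE_SIZE_KB = 4
pages = 307200
cache_mb = pages * PAGE_SIZE_KB / 1024  # kB -> MB
print(cache_mb)  # 1200.0 MB, i.e. about 1.2 GB
```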
More information about hard disks, controllers and RAID levels is to be found in the white paper<br />
Performance Report – Modular RAID [L5].<br />
Typical maximum I/O rates per hard disk, depending on rotational speed:<br />
5400 rpm 62 IO/s<br />
7200 rpm 75 IO/s<br />
10000 rpm 120 IO/s<br />
15000 rpm 170 IO/s<br />
<strong>Exchange</strong> <strong>Server</strong><br />
MS<strong>Exchange</strong>IS Mailbox(_Total)\Send Queue Size<br />
The send queue contains the <strong>Exchange</strong> objects that are waiting to be forwarded. The destination<br />
can be either a local database or another mail server. This queue should be smaller than 500. A<br />
higher value is an indication of an overloaded system.<br />
SMTP <strong>Server</strong>(_Total)\Local Queue Length<br />
The local queue of the SMTP server contains the <strong>Exchange</strong> objects that have to be incorporated in<br />
the local databases. It should not be larger than 100. A larger queue, in connection with longer disk<br />
queues and higher disk response times, is an indication of an overloaded disk subsystem.<br />
SMTP <strong>Server</strong>(_Total)\Remote Queue Length<br />
The remote queue of the SMTP server contains the <strong>Exchange</strong> objects that have to be processed<br />
by remote mail servers. It should always be smaller than 1000. A larger queue is an indication of<br />
connection problems or problems with the remote mail servers.<br />
MS<strong>Exchange</strong>IS\RPC Averaged Latency<br />
This counter contains the average waiting time of outstanding requests. The value in milliseconds<br />
should be less than 50. A higher value is an indication of an overloaded system.<br />
MS<strong>Exchange</strong>IS\RPC Requests<br />
This counter contains the number of outstanding requests. The value should be less than 30. A<br />
higher value is an indication of an overloaded system.<br />
MS<strong>Exchange</strong>IS\VM Total Large Free Block Bytes<br />
This counter contains the size of the largest free virtual memory block. The value is a measure for<br />
the fragmenting of the virtual address space. It should be more than 500 MB. For more information<br />
about this see the Microsoft TechNet article KB325044 [L19].<br />
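The thresholds given in this chapter can be collected into a simple offline health check that is applied to averaged counter values from a recorded perfmon log. The sketch below is purely illustrative; the counter keys follow the list above, and the helper is an assumption, not an <strong>Exchange</strong> or Windows tool:

```python
# Sketch of an offline health check applying this chapter's thresholds
# to averaged performance counter values (e.g. from a perfmon log).
# The helper itself is an illustration, not an Exchange or Windows tool.

THRESHOLDS = {
    "Processor(_Total)\\% Processor Time":           lambda v, ctx: v <= 80,
    "System\\Processor Queue Length":                lambda v, ctx: v < ctx["cpus"],
    "Memory\\Available MBytes":                      lambda v, ctx: v > 50,
    "Memory\\Free System Page Table Entries":        lambda v, ctx: v >= 3500,
    "LogicalDisk\\Avg. Disk sec/Read":               lambda v, ctx: v < 0.020,
    "LogicalDisk\\Avg. Disk sec/Write":              lambda v, ctx: v < 0.020,
    "MSExchangeIS Mailbox(_Total)\\Send Queue Size": lambda v, ctx: v < 500,
    "MSExchangeIS\\RPC Averaged Latency":            lambda v, ctx: v < 50,
    "MSExchangeIS\\RPC Requests":                    lambda v, ctx: v < 30,
    "SMTP Server(_Total)\\Local Queue Length":       lambda v, ctx: v <= 100,
    "SMTP Server(_Total)\\Remote Queue Length":      lambda v, ctx: v < 1000,
}

def check(samples, ctx):
    """Return the counters whose averaged value violates its threshold."""
    return [name for name, ok in THRESHOLDS.items()
            if name in samples and not ok(samples[name], ctx)]

# Example: a healthy server except for an overloaded disk subsystem
samples = {
    "Processor(_Total)\\% Processor Time": 45,
    "System\\Processor Queue Length": 1,
    "Memory\\Available MBytes": 220,
    "LogicalDisk\\Avg. Disk sec/Read": 0.035,  # 35 ms: clearly too slow
    "MSExchangeIS\\RPC Averaged Latency": 20,
}
print(check(samples, {"cpus": 2}))  # ['LogicalDisk\\Avg. Disk sec/Read']
```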
As already mentioned at the start of the chapter, the performance counters described only represent a small<br />
selection of the counters that are available and relevant for <strong>Exchange</strong>. Far-reaching<br />
bottleneck analyses will undoubtedly call for additional performance counters. A description of all<br />
<strong>Exchange</strong>-relevant counters would go beyond the scope of this paper; reference is therefore made to the<br />
appropriate documentation from Microsoft [L10], in particular the white paper Troubleshooting <strong>Exchange</strong><br />
<strong>Server</strong> <strong>2003</strong> Performance [L12].<br />
PRIMERGY as <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong><br />
Which PRIMERGY models are suitable for <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong>? In principle, every PRIMERGY<br />
model has sufficient performance and provides an adequate memory configuration for a certain number of<br />
<strong>Exchange</strong> users. However, since the previous sections have shown that <strong>Exchange</strong> <strong>Server</strong> is a very disk-I/O-intensive<br />
application, the hard disk configuration of the PRIMERGY plays a significant role in its suitability for<br />
<strong>Exchange</strong> and in the maximum number of users.<br />
With regard to disk I/O, the internal configuration options of the servers are mostly inadequate for<br />
<strong>Exchange</strong>. It therefore makes sense to extend the server with an external disk subsystem. Direct Attached<br />
Storage (see chapter Disk Subsystem) or a Storage Area Network (SAN) are fully suitable for<br />
<strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong>. Various technologies exist for connecting such disk subsystems: SCSI for<br />
the direct connection and Fibre Channel (FC) or Internet SCSI (iSCSI) in the SAN environment. A further<br />
connection option is provided by Network Attached Storage (NAS) in conjunction with Windows Storage<br />
<strong>Server</strong> <strong>2003</strong>.<br />
Below the current PRIMERGY systems and their performance classes are explained with regard to<br />
<strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> and configuration options for performance classes of between 75 and 5,000 users<br />
are presented. Configurations with more than 5,000 users are possible in cluster solutions with PRIMERGY<br />
systems, but are not described here.<br />
The configurations described here have all been tested for their functionality in the PRIMERGY Performance<br />
Lab and subjected to the medium load profile described in the chapter <strong>Exchange</strong> Measurement<br />
Methodology.<br />
When dimensioning all configurations the following assumptions were taken as a basis:<br />
Disk capacity<br />
• For the operating system and program files we estimate 10 GB.<br />
• For the Active Directory we estimate 2 GB, which is adequate for approximately 300,000 entries.<br />
• We calculate the <strong>Exchange</strong> databases on the basis of an average mailbox size of 100 MB per user.<br />
Since mails are not deleted directly from a database, but only after a fixed latency period (default 30<br />
days), we calculate for this an additional disk capacity of 100%.<br />
• For the disk requirements of the log files we assume an average mail volume of 3 MB per user and day.<br />
The disk area for the log files must be sufficient to store all the data that occurs until the next online<br />
backup. A daily backup is advisable. For security reasons we plan log-file space that will be adequate<br />
for seven days.<br />
• Based on the same mail volume of 3 MB per user and day we plan space for one day for SMTP and<br />
MTA queues.<br />
• While a database is restored directly onto its own volume, extra disk space must be planned for the<br />
restore of log files; its size is determined by the sum of the log files that accumulate between two full<br />
backups. For this we calculate the same disk capacity as for the log files of a storage group.<br />
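For a concrete number of users, the assumptions above can be turned into a small sizing calculation. The following sketch is illustrative (the helper name and returned structure are not part of this guide); it uses decimal units (1 GB = 1000 MB) as in the examples of this paper:

```python
# Sketch: disk capacity estimate following the assumptions above
# (10 GB OS/programs, 2 GB Active Directory, 100 MB mailbox with 100%
# reserve, 3 MB mail volume per user and day, 7 days of log files,
# 1 day of queues, restore area equal to the log area).

def exchange_disk_gb(users, mailbox_mb=100, mail_mb_per_day=3, log_days=7):
    databases = users * mailbox_mb * 2 / 1000  # 100% reserve for deferred deletes
    logs = users * mail_mb_per_day * log_days / 1000
    queues = users * mail_mb_per_day / 1000    # one day of SMTP/MTA queues
    return {"os": 10, "ad": 2, "databases": databases,
            "logs": logs, "queues": queues, "restore": logs}

sizing = exchange_disk_gb(200)
print(sizing["databases"])  # 40.0 GB, as in the 200-user examples below
print(sizing["logs"])       # 4.2 GB of log files for a 7-day stock
```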
Processor performance and main memory<br />
• The processor performance was dimensioned in such a way that during a load simulation (without a virus<br />
scanner, spam filter or backup) the processor load did not exceed 30%. Thus there was sufficient<br />
headroom for other services, such as a virus scanner or backup.<br />
• Since most of the main memory is used as a cache for the <strong>Exchange</strong> databases, it was dimensioned<br />
depending on the disk subsystem in such a way that, on the one hand, the disk queue length is not<br />
greater than the number of hard disks used for the databases and, on the other, the response time for<br />
95% of all transactions is not more than 500 ms.<br />
Backup<br />
Since regular data backups are vital for stable <strong>Exchange</strong> operations, we have taken the following into<br />
consideration for the sample configurations:<br />
• It must be possible to carry out the entire backup at maximum database size in a maximum of seven<br />
hours.<br />
• The restore of an individual database may not take more than four hours. This requirement also<br />
influences the <strong>Exchange</strong> design and the structuring in storage groups and databases.<br />
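The two windows translate directly into minimum sustained data rates that the backup hardware must deliver. A small illustrative calculation (the helper name is an assumption, not from this guide):

```python
# Sketch: minimum sustained throughput needed to meet the stated windows
# (full backup in at most 7 hours, restore of one database in at most
# 4 hours).

def required_mb_per_sec(data_gb, window_hours):
    """Throughput in MB/s needed to move data_gb within window_hours."""
    return data_gb * 1024 / (window_hours * 3600)

# Example: 40 GB database (200 users, 100 MB mailboxes, 100% reserve)
backup_rate = required_mb_per_sec(40, 7)
restore_rate = required_mb_per_sec(40, 4)
print(round(backup_rate, 1), round(restore_rate, 1))  # 1.6 2.8 MB/s
```

Even a small installation therefore needs a tape drive that sustains a few MB/s; larger databases scale these rates accordingly.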
The sample configurations presented here are pure <strong>Exchange</strong> mailbox servers, so-called back-end servers.<br />
Particularly with larger <strong>Exchange</strong> installations there is a need for further servers for Active Directory, DNS<br />
and, if necessary, other services, such as Outlook Web Access (OWA), SharePoint Portal <strong>Server</strong> or others.<br />
At any rate, the sizing of an <strong>Exchange</strong> server requires the analysis of the customer requirements and of the<br />
existing infrastructure as well as expert consulting.<br />
PRIMERGY Econel 100<br />
The mono processor system<br />
PRIMERGY Econel 100 offers<br />
configuration options with up to<br />
8 GB of main memory and four<br />
internal SATA hard disks. The<br />
optional 1-channel SCSI<br />
controller is intended to control<br />
the backup media.<br />
Disks internal max. 4, 80 – 500 GB, 7200 rpm<br />
Disks external none<br />
Onboard LAN 2 × 10/100/1000 Mbit<br />
As the entry-level model of the PRIMERGY server family, the PRIMERGY Econel 100 is suited for small<br />
companies in the small business segment, where it provides data security for smaller budgets. It should be<br />
noted that if deployed in such a scenario the<br />
Active Directory must in the majority of cases also be accommodated in the system. However, in the small<br />
business environment the Active Directory is not critical as regards hard disk access, since there should not<br />
be many changes and read access is temporarily buffered by <strong>Exchange</strong>. The only prerequisite here should<br />
be somewhat more main memory. If used in the branch offices of larger companies the data volume arising<br />
as a result of replication of the Active Directory depends on the design of the Active Directory. This not only<br />
influences the necessary network bandwidth between branch office and head office, but also the hardware of<br />
the Active Directory server in the branch office.<br />
If a Pentium 4 631, memory of 1 GB and four 80 GB hard disks are used, the configuration depicted below<br />
could in connection with Microsoft Small Business <strong>Server</strong> <strong>2003</strong> [L16] be an entry-level solution for up to 75<br />
users. Since a regular data backup is essential for the smooth operation of an <strong>Exchange</strong> <strong>Server</strong> (see chapter<br />
Backup), a DDS Gen5 or VXA-2 tape drive is recommended.<br />
In connection with the standard products of Windows <strong>Server</strong> <strong>2003</strong> and <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> instead of<br />
Small Business <strong>Server</strong> <strong>2003</strong> there is no limitation to 75 users, and a PRIMERGY Econel 100 with the four<br />
internal SATA hard disks, a Pentium D 820 and main memory of 2 GB can manage up to 200 <strong>Exchange</strong><br />
users. In this larger configuration a VXA-2 drive should be used for the backup, because with DDS Gen5 the<br />
tape capacity may not be adequate to perform a complete backup without having to change the tape.<br />
Although hard disks with a capacity of up to 500 GB are available for the PRIMERGY Econel 100, you should<br />
not be tempted to replace the four small-capacity hard disks with two large-capacity hard disks.<br />
Four hard disks are deliberately used here so that the <strong>Exchange</strong> databases and log files can be stored on<br />
different physical hard disks, for performance and security reasons (see chapter Disk subsystem).<br />
The picture opposite shows a small configuration for<br />
an <strong>Exchange</strong> server with Active Directory. The four<br />
disks are connected directly to the internal onboard<br />
SATA controller. The RAID 1 functionality can be<br />
realized either with the disk mirroring of Windows<br />
<strong>Server</strong> <strong>2003</strong> or with the onboard 4-port SATA<br />
controller with HW-RAID. Provided the system is<br />
secured by UPS, the disk caches should be activated<br />
to improve performance.<br />
Processors 1 × Pentium D 820 (2.8 GHz, 2×1 MB SLC) or<br />
1 × Pentium 4 631/641 (3.0/3.2 GHz, 2 MB SLC) or<br />
1 × Celeron D 346 (3.06 GHz, 256 kB SLC)<br />
Memory max. 8 GB<br />
Onboard RAID SATA<br />
PCI SCSI controller 1 × 1-channel<br />
SCSI channels 1 (backup device)<br />
[Figure: disk layout – RAID 1: OS, AD, Logs, Queues; RAID 1: Store]<br />
Two mirrored system drives are set up. One partition is created for each system drive. The first system drive<br />
is used for the operating system, Active Directory, log files and queues, the second one is solely for the<br />
<strong>Exchange</strong> databases (store).<br />
<strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> in its standard edition supports two databases with a maximum database size of<br />
75 GB, with one database being required for the mailboxes and one database for the public folders. Thus<br />
this configuration meets the assumptions specified at the start of the calculation for an average mailbox size<br />
of 100 MB per user and 100% reserve for database objects that are not to be deleted immediately, but only<br />
after a standard latency of 30 days. Under these conditions the database for the mailboxes would in the<br />
worst case grow to up to 100 MB × 200 users × 2 = 40 GB.<br />
In accordance with the assumptions made initially of a log file volume of 3 MB per user and day, a disk space<br />
requirement of 4 GB must be calculated for a 7-day stock of log files for 200 users.<br />
PRIMERGY Econel 200<br />
The dual processor system PRIMERGY<br />
Econel 200 offers the computing<br />
performance of two Intel Xeon DP<br />
processors and configuration options with<br />
up to 4 GB of main memory as well as four<br />
internal SATA hard disks. The optional 1-channel<br />
SCSI controller is intended to<br />
control the backup drives.<br />
Processors 2 × Xeon DP<br />
2.8/3.4 GHz, 2 MB SLC<br />
Memory max. 4 GB<br />
Onboard RAID SATA<br />
PCI SCSI controller 1 × 1-channel<br />
SCSI channels 1 (backup device)<br />
Disks internal max. 4<br />
80 – 500 GB, 7200 rpm<br />
Disks external none<br />
Onboard LAN 2 × 10/100/1000 Mbit<br />
Like the entry-level model PRIMERGY<br />
Econel 100, the PRIMERGY Econel 200 is<br />
also ideally suited for the small business segment or for branch offices where more computing performance<br />
is needed than a mono processor system can provide.<br />
It should be noted that if deployed in such a scenario the Active Directory must in the majority of cases also<br />
be accommodated in the system. However, in the small business environment the Active Directory is not<br />
critical as regards hard disk access, since there should not be many changes and read access is temporarily<br />
buffered by <strong>Exchange</strong>. The only prerequisite here should be somewhat more main memory. If used in the<br />
branch offices of larger companies the data volume arising as a result of replication of the Active Directory<br />
depends on the design of the Active Directory. This not only influences the necessary network bandwidth<br />
between branch office and head office, but also the hardware of the Active Directory server in the branch<br />
office.<br />
The picture opposite shows a small configuration for<br />
an <strong>Exchange</strong> server with Active Directory. The four<br />
disks are connected directly to the internal onboard<br />
SATA controller. The RAID 1 functionality can be<br />
realized either with the disk mirroring of Windows<br />
<strong>Server</strong> <strong>2003</strong> or with the onboard 4-port SATA<br />
controller with HW-RAID. Provided the system is<br />
secured by UPS, the disk caches should be activated<br />
to improve performance.<br />
[Figure: disk layout – RAID 1: OS, AD, Logs, Queues; RAID 1: Store]<br />
Although hard disks with a<br />
capacity of up to 500 GB are available for the PRIMERGY Econel 200, you should not be tempted to<br />
replace the four small-capacity hard disks with two large-capacity hard disks. Four hard disks are<br />
deliberately used here so that the <strong>Exchange</strong> databases and log files can be stored on different physical<br />
hard disks, for performance and security reasons (see chapter Disk subsystem).<br />
Two mirrored system drives are set up. One partition is created for each system drive. The first system drive<br />
is used for the operating system, Active Directory, log files and queues, the second one is solely for the<br />
<strong>Exchange</strong> databases (store).<br />
If you equip the PRIMERGY Econel 200 with one or two Xeon DP processors and 1 GB of memory, this<br />
configuration could in connection with Microsoft Small Business <strong>Server</strong> <strong>2003</strong> [L16] be an entry-level<br />
configuration for up to 75 users, which can be loaded with other CPU- or memory-intensive tasks. Since a<br />
regular data backup is essential for the smooth operation of an <strong>Exchange</strong> <strong>Server</strong> (see chapter Backup), a<br />
VXA-2 or VXA-320 tape drive is recommended.<br />
If you use the standard products of Windows <strong>Server</strong> <strong>2003</strong> and <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> instead of the<br />
Microsoft Small Business <strong>Server</strong> Edition that is limited to 75 users, an appropriately configured PRIMERGY<br />
Econel 200 with two processors and 2 GB of main memory can by all means manage up to 200 users, with<br />
the disk subsystem of at most four internal SATA hard disks proving to be the limiting factor. Alternatively, a<br />
Network Attached Storage (NAS) on the basis of the Windows Storage <strong>Server</strong> <strong>2003</strong> can also be used as disk<br />
subsystem. Thus, as regards computing performance, the PRIMERGY Econel 200 would also suit as a cost-effective<br />
solution for dedicated <strong>Exchange</strong> <strong>Server</strong>s for branch offices or Application Service Provider (ASP)<br />
data centers. However, as it is not possible to integrate the PRIMERGY Econel 200 in a 19" rack, it is less<br />
suited for this field of application; in this case the rack servers of the PRIMERGY product line are<br />
recommended.<br />
PRIMERGY TX150 S4<br />
The mono processor system PRIMERGY<br />
TX150 S4 offers configuration options<br />
with up to 8 GB RAM and four internal<br />
SAS or SATA hard disks. Optionally,<br />
further 28 hard disks are possible with<br />
two externally connected PRIMERGY<br />
SX30. In addition to the classic floor-stand<br />
housing form, the PRIMERGY<br />
TX150 S4 is also available in a rack<br />
version.<br />
As a floor-stand server the PRIMERGY<br />
TX150 S4 is suited for small companies<br />
in the small business segment or as a<br />
server for small branch offices. It should<br />
be noted that if deployed in such a<br />
scenario the Active Directory must in the<br />
majority of cases also be accommodated<br />
in the system. However, in the small<br />
business environment the Active<br />
Directory is not critical as regards hard<br />
disk access, since there should not be<br />
many changes and read access is<br />
temporarily buffered by <strong>Exchange</strong>. The<br />
only prerequisite here should be somewhat more main memory.<br />
Processors 1 × Pentium D 820 / 930 / 940 / 950<br />
(2.8/3.0/3.2/3.4 GHz, 2 MB SLC) or<br />
1 × Pentium 4 631/651 (3.0/3.4 GHz, 2 MB SLC) or<br />
1 × Celeron D 346 (3.06 GHz, 256 KB SLC)<br />
Memory max. 8 GB<br />
Onboard RAID SCSI or SATA<br />
PCI SCSI controller 1 × 1-channel (backup device)<br />
1 × 2-channel RAID<br />
Disks internal max. 4<br />
Disks external max. 28<br />
Onboard LAN 1 × 10/100/1000 Mbit<br />
If used in the branch offices of larger companies the data volume arising as a<br />
result of replication of the Active Directory depends on the design of the Active Directory. This not only<br />
influences the necessary network bandwidth between branch office and head office, but also the hardware of<br />
the Active Directory server in the branch office.<br />
If a Pentium 4 631 and a memory of 1 GB are used, the PRIMERGY TX150 S4 with four internal 80-GB<br />
SATA or four 73-GB SCSI hard disks can in connection with Microsoft Small Business <strong>Server</strong> <strong>2003</strong> [L16] be<br />
an entry-level configuration for up to 75 users. The diagram below shows a small configuration for an<br />
<strong>Exchange</strong> server with Active Directory. The four disks<br />
can be connected directly to the internal onboard<br />
SCSI or SATA controller. The RAID 1 functionality<br />
can be realized either with the disk mirroring of<br />
Windows <strong>Server</strong> <strong>2003</strong> or with the zero-channel RAID<br />
option of the 1-channel onboard Ultra 320 SCSI<br />
controller »LSI 1020A« or with the 4-port SATA<br />
controller »Promise FastTrak S150-TX4« with HW-<br />
RAID. Provided the system is secured by UPS, the<br />
controller and disk caches should be activated.<br />
[Figure: disk layout – RAID 1: OS, AD, Logs, Queues; RAID 1: Store]<br />
Two mirrored system drives are set up. One partition is created for each system drive. The first system drive<br />
is used for the operating system, Active Directory, log files and queues, the second one is solely for the<br />
<strong>Exchange</strong> databases (store).<br />
In no way should a RAID 5 array be made from the four hard disks which are used in the PRIMERGY<br />
TX150 S4, nor should a hard disk be saved by building a RAID 5 array with only three hard disks. Neither<br />
should the two RAID 1 arrays consisting of four hard disks be replaced by a single RAID 1 array of two<br />
large-capacity hard disks. Both would have a fatal effect on the system performance: on the one hand,<br />
RAID 5 is considerably slower than RAID 1; on the other hand, the operating system and all user data with<br />
different access patterns would then be on one volume. The performance would no longer be acceptable.<br />
With a maximum of 75 users under Small Business <strong>Server</strong> <strong>2003</strong>, and based on the assumptions specified<br />
initially (an average mailbox size of 100 MB per user and a 100% reserve for deleted database objects), the<br />
<strong>Exchange</strong> database for the mailboxes will grow to about 100 MB × 75 users × 2 = 15 GB. A DDS Gen5<br />
drive is recommended as backup drive.<br />
© <strong>Fujitsu</strong> Technology Solutions, 2009 Page 49 (69)
White Paper <strong>Sizing</strong> <strong>Guide</strong> <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> Version: 4.2, July 2006<br />
If you equip the PRIMERGY TX150 S4 with a more powerful processor, e.g. a Pentium D 820, and main<br />
memory of 2 GB, it is quite possible to manage up to 200 <strong>Exchange</strong> users. However, Small<br />
Business <strong>Server</strong> <strong>2003</strong>, which is limited to 75 users, is then no longer sufficient; the products Windows <strong>Server</strong><br />
<strong>2003</strong> Standard Edition and <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> Standard Edition should be used instead. With a<br />
maximum mailbox size of 100 MB, the database for the mailboxes of 200 users will grow to approximately<br />
100 MB × 200 users × 2 = 40 GB. Therefore, a VXA-2 drive should be used as backup medium, as otherwise<br />
more than one tape may be required for a full backup.<br />
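The sizing rule applied in both examples can be expressed as a short calculation. The following sketch is purely illustrative; the helper name and the decimal GB conversion are our own, not part of this guide:<br />

```python
# Store sizing rule used in this guide:
# store size = users × mailbox quota × 2 (100 % reserve for deleted items).
# Illustrative helper; the function name and defaults are our own.

def store_size_gb(users: int, mailbox_mb: int = 100, reserve_factor: int = 2) -> float:
    """Expected Exchange store size in GB (decimal, 1 GB = 1000 MB)."""
    return users * mailbox_mb * reserve_factor / 1000

print(store_size_gb(75))   # Small Business Server 2003 limit
print(store_size_gb(200))  # Standard Edition example
```

For 75 users this yields 15 GB and for 200 users 40 GB, matching the figures above.<br />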
A server in the small and medium-sized enterprise (SME) environment or in branch offices is frequently also<br />
used as a file and print server because it is often the only server on site. For simple print services it is<br />
sufficient to have a somewhat more powerful processor, e.g. Pentium D 940, and more memory. However,<br />
for additional file-server services that go beyond occasional accesses the internal hard disks are inadequate.<br />
In this case at least two further hard disks should be added in a secure RAID 1 array; to this end the system<br />
can be extended with a PRIMERGY SX30, which provides up to 14 additional hard disks. The PRIMERGY<br />
SX30 is available as a 1- or 2-channel version; which version is preferable depends on the field of application.<br />
In the SME environment, where further RAID arrays are built in the PRIMERGY SX30 in addition to the RAID<br />
arrays for the <strong>Exchange</strong> databases, the 2-channel variant with an appropriate RAID controller is<br />
recommended.<br />
[Diagram: RAID 1 – OS | RAID 1 – Queues | RAID 1 – AD | RAID 1 – Logs]<br />
With more disks and more memory this configuration can, as a dedicated <strong>Exchange</strong> <strong>Server</strong>, serve<br />
up to 700 users. In this case it is advisable to use a 1-channel PRIMERGY SX30 and, if necessary, a<br />
second PRIMERGY SX30 as a supplement if additional data volume is required for the <strong>Exchange</strong> databases.<br />
The picture shows a sample configuration with a 2-channel RAID controller and one PRIMERGY SX30.<br />
The PRIMERGY TX150 and PRIMERGY SX30 are alternatively also offered as rack solutions. The<br />
operational area would then be less as an office server and more as a dedicated server in a data center or<br />
with an application service provider (ASP).<br />
[Diagram continued: RAID 1+0 (6 disks) – Store | RAID 1+0 or RAID 5 (4 disks) – File Sharing, etc.]<br />
PRIMERGY TX200 S3<br />
The PRIMERGY TX200 S3 is the »big brother« of the PRIMERGY TX150 S4. Also designed as an<br />
all-round server in the guise of a tower server, the PRIMERGY TX200 S3 offers the computing<br />
performance of two Intel Xeon DP dual-core processors, a maximum of nine internal hot-plug<br />
hard disks and a RAID-compliant onboard SCSI, SATA or SAS controller. With the assistance of two<br />
additional 2-channel PCI RAID controllers it is possible to connect up to four PRIMERGY SX30<br />
with up to 56 hard disks externally.<br />
Like the PRIMERGY TX150 S4, the PRIMERGY TX200 S3 is also available in a rack version.<br />
However, for use in a rack the rack-optimized systems PRIMERGY RX200 S3 and PRIMERGY RX300 S3,<br />
which offer the same computing performance, should be of greater interest.<br />
As a floor-stand version the PRIMERGY TX200 S3 is ideally suited both for the SME segment and for<br />
branch offices where more computing performance is needed than a mono-processor system can<br />
provide. In such an environment there are, as already discussed with the PRIMERGY TX150 S4, mostly<br />
additional tasks for a server. Since in these environments only one server at most is installed, the latter must<br />
in addition to <strong>Exchange</strong> also perform other services, such as Active Directory, DNS, file and print service.<br />
The following picture shows a sample configuration for such tasks.<br />
Processors 2 × Xeon DP 5050/5060,<br />
3.0/3.2 GHz, 2 × 2 MB SLC or<br />
2 × Xeon DP 5110/5120/5130/5140,<br />
1.6/1.8/2.0/2.3 GHz, 4 MB SLC<br />
Memory max. 16 GB<br />
Onboard RAID 2-channel SCSI or 2-port SATA or<br />
8-port SAS with 0-ch. RAID controller<br />
PCI RAID controller 2 × 2-channel SCSI<br />
Disks internal 6 × SAS/SATA (optional 9 × SCSI)<br />
Disks external max. 56<br />
Onboard LAN 1 × 10/100/1000 Mbit<br />
[Diagram: RAID 1 – OS | RAID 1 – Logs | RAID 1 – AD | RAID 1+0 – Store]<br />
The six internal hard disks are mirrored in pairs (RAID 1). The first system drive is used for the operating<br />
system, the second one for the Active Directory and the third one for queues and restore. Two of the external<br />
hard disks are planned as a RAID 1 for log files. Six hard disks in a RAID 10 array house the <strong>Exchange</strong><br />
databases (store) and the data area for file sharing. Provided the system is secured by UPS, the controller<br />
and disk caches should be activated for performance reasons.<br />
Using this disk subsystem as well as two Xeon DP 51xx processors and 3 GB of main memory it is quite<br />
possible to manage up to 700 users. Depending on the requirements made of disk capacity an<br />
upgrade with a second PRIMERGY SX30 is also possible. As a dedicated <strong>Exchange</strong> <strong>Server</strong> equipped with a<br />
good CPU and memory, and with fast 15 krpm hard disks, it can even manage up to 1,000 users of a branch<br />
office or a medium-sized business.<br />
[Diagram continued: RAID 1 – Queues | RAID 1+0 (6 disks) – File Sharing, etc.]<br />
For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see<br />
chapters Main memory and Operating system). With these options the virtual address space of 4 GB is<br />
distributed in favor of applications at a ratio of 3:1 (3 GB for the application, 1 GB for the kernel) instead of<br />
the standard 2:2 split. Microsoft recommends using the /3GB option from a physical memory configuration of<br />
1 GB upwards. For more details see Microsoft Knowledge Base Article Q823440.<br />
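As an illustration, a BOOT.INI entry with these switches might look as follows. This is a hedged sketch: the ARC path is only an example, and the /USERVA=3030 value is the setting commonly recommended by Microsoft for <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong>; verify it against the Knowledge Base article cited above.<br />

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
; /3GB shifts the user/kernel split to 3:1; /USERVA fine-tunes the user space
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB /USERVA=3030
```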
If we take the general conditions specified at the beginning of this chapter as a basis, namely that a mailbox<br />
is at most 100 MB and that the space requirement for deleted mails not yet removed from the database is<br />
covered by a factor of two, we need for 700 users 700 users × 100 MB × 2 = 140 GB and for 1,200 users<br />
1200 users × 100 MB × 2 = 240 GB of disk space for the <strong>Exchange</strong> databases. However, an individual<br />
database should not be greater than 100 GB, as otherwise the restore time of a database can exceed four<br />
hours. To enable fast restore times, several small databases should therefore be preferred. As already<br />
explained in the chapters Transaction principle and Backup, <strong>Exchange</strong> <strong>Server</strong> supports up to 20 databases,<br />
which are organized in so-called storage groups (SG) of at most five databases each. Whereas for versions<br />
prior to <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> the rule was to fill up individual storage groups with databases before<br />
creating a further storage group on account of the administration overhead, for <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> it is<br />
recommended to open an additional storage group as soon as more than two databases are needed (see<br />
Microsoft Knowledge Base Article Q890699). Unless organizational reasons suggest a different distribution,<br />
one storage group with two databases would be used for 700 users and two storage groups, each with two<br />
databases, for 1,200 users.<br />
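The distribution rule can be sketched as follows; the helper and the symmetric two-databases-per-group layout are our own simplification of the recommendation above:<br />

```python
# Rule of thumb from the text: keep each database below 100 GB, and with
# Exchange Server 2003 open a new storage group once a group would hold
# more than two databases (hard limits: 4 groups, 5 databases per group).
# Sketch under these assumptions; helper names are ours.
import math

def sg_layout(total_store_gb: float, max_db_gb: int = 100, dbs_per_sg: int = 2):
    """Return (storage_groups, databases) for an even two-per-group layout."""
    min_dbs = max(1, math.ceil(total_store_gb / max_db_gb))
    sgs = math.ceil(min_dbs / dbs_per_sg)
    return sgs, sgs * dbs_per_sg  # groups filled symmetrically with 2 DBs each

print(sg_layout(140))  # 700 users
print(sg_layout(240))  # 1,200 users
```

For 140 GB (700 users) this yields one storage group with two databases, and for 240 GB (1,200 users) two storage groups with four databases in total.<br />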
In accordance with the assumptions made initially of a log file volume of 3 MB per user and day, a disk<br />
space requirement of about 15 GB arises over 7 days for 700 users and about 26 GB for 1,200 users. It is<br />
thus sufficient to form a secure RAID 1 from two 36-GB hard disks for this purpose.<br />
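The log sizing can be sketched in the same way (illustrative helper; names are ours):<br />

```python
# Log-volume sizing used in this guide: 3 MB of transaction logs per user per
# day, retained for 7 days. Illustrative helper; names and defaults are ours.
import math

def log_space_gb(users: int, mb_per_user_day: int = 3, days: int = 7) -> int:
    """Disk space to reserve for log files, in GB (decimal), rounded up."""
    return math.ceil(users * mb_per_user_day * days / 1000)

print(log_space_gb(700))
print(log_space_gb(1200))
```

This reproduces the figures above: 15 GB for 700 users and 26 GB for 1,200 users over 7 days.<br />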
On account of the database size a VXA-320 or LTO2 drive with a tape capacity of 160 or 200 GB is suitable<br />
as backup medium for 700 users. For 1,200 users an LTO3 drive with a tape capacity of 400 GB should be<br />
used, as otherwise more than one tape may be required for a full backup.<br />
PRIMERGY RX100 S3<br />
The PRIMERGY RX100 S3 is a rack-optimized server<br />
of only one height unit (1U).<br />
The PRIMERGY RX100 S3 was especially designed<br />
for use in server farms, appliances, front-end solutions<br />
and hard-disk-less solutions for Internet and<br />
application service providers (ASP). In other words, for<br />
applications in which it is important to use many<br />
servers in a very confined space.<br />
The PRIMERGY RX100 S3 offers two internal SATA<br />
hard disks with an integrated RAID 1 controller.<br />
Despite its compact design, one full-height PCI slot<br />
and a low-profile PCI slot, each of half length, are<br />
available.<br />
Processors 1 × Celeron D 346<br />
3.06 GHz, 256 SLC or<br />
1 × Pentium 4 631<br />
3.0 GHz, 2 MB SLC or<br />
1 × Pentium D 820<br />
2.8 GHz, 2 × 1 MB SLC or<br />
1 × Pentium D 930/940/950<br />
3.0/3.2/3.4 GHz, 2 × 2 MB SLC<br />
Memory max. 8 GB<br />
Onboard RAID SATA<br />
Disks internal max. 2<br />
Disks external none<br />
Onboard LAN 2 × 10/100/1000 Mbit<br />
The computing performance is comparable with that of<br />
a PRIMERGY TX150 S4. However, due to the rack<br />
optimization the configuration options are somewhat restricted. The internal hard disks meet the<br />
requirements of the operating system, but an external disk subsystem has to be connected for the data of<br />
<strong>Exchange</strong>. Unfortunately, neither SCSI RAID controllers nor Fibre-Channel controllers can be used. Thus the<br />
PRIMERGY RX100 S3 is only suited, in connection with Network Attached Storage (NAS) running Windows<br />
Storage <strong>Server</strong> <strong>2003</strong> or with an iSCSI-compliant storage system, for branch offices or ASP data<br />
centers that provide their customers with cost-effective dedicated <strong>Exchange</strong> <strong>Server</strong>s on this basis.<br />
In connection with an adequately equipped LAN-based disk subsystem the PRIMERGY RX100 S3 can<br />
manage up to 250 users. To connect external backup drives either a PRIMERGY SX10 is used in<br />
conjunction with 5¼“ devices, or backup devices in 19“ format are used.<br />
PRIMERGY RX200 S3<br />
Like the PRIMERGY RX100 S3, the PRIMERGY RX200 S3 is a rack-optimized<br />
computing node with one height unit (1U). However, in contrast to the PRIMERGY<br />
RX100 S3, the PRIMERGY RX200 S3 has, despite its small height, two dual-core<br />
processors and four SAS hard disks to offer. Moreover, the PRIMERGY RX200 S3<br />
already has an onboard RAID-compliant SAS controller for the internal hard disks<br />
and two Gbit LAN interfaces. For further scaling a 2-channel RAID controller can be<br />
used, so that it can be ideally combined with one or two PRIMERGY SX30. As an<br />
alternative to a SCSI-based disk subsystem it is also possible to connect a SAN<br />
through up to two 1-channel Fibre-Channel controllers.<br />
Processors 2 × Xeon DP 5050/5060/5080<br />
3.0/3.2/3.73 GHz, 2 × 2 MB SLC or<br />
2 × Xeon DP<br />
5110/5120/5130/5140/5150/5160<br />
1.66/1.86/2.0/2.33/2.66/3.0 GHz, 4 MB SLC<br />
Memory max. 32 GB<br />
Onboard RAID SAS / SATA<br />
PCI SCSI controller 2 × 1-channel<br />
1 × 2-channel RAID<br />
Disks internal 2 × SAS or SATA 3½” or<br />
4 × SAS or SATA 2½”<br />
Disks DAS external 28<br />
Fibre-Channel max. 2 channels<br />
Onboard LAN 2 × 10/100/1000 Mbit<br />
In a configuration with two Xeon DP 51xx processors and 3 GB of main memory the PRIMERGY RX200 S3<br />
can efficiently manage up to 1,200 mail users with an appropriate configuration with a PRIMERGY SX30.<br />
[Diagram: RAID 1 – OS, AD | RAID 1 – Logs | RAID 1 – Queues | RAID 1+0 (8 disks) – Store]<br />
For this purpose the four internal hard disks are mirrored in pairs (RAID 1) through the onboard SAS<br />
controller. The first logical drive is used for the operating system and for the Active Directory, the second<br />
logical drive for queues and restore. The log files are stored on two external hard disks configured as a<br />
RAID 1. The remaining eight hard disks are put together to form a RAID 10 and house the <strong>Exchange</strong><br />
databases (store). You should not be misled into believing that you can reduce the number of hard disks by<br />
using a RAID 5 instead of a RAID 10 or by using large-capacity hard disks. The number of hard disks is<br />
calculated from the anticipated I/O accesses per second. The efficiency of the individual hard disks and<br />
RAID levels was discussed in detail in chapter Hard disks. In this scenario for approximately 1,200 users this<br />
means an I/O rate of 1200 users × 0.6 IO/user/s = 720 IO/s for the <strong>Exchange</strong> databases. If you take hard<br />
disks with 15 krpm as a basis, you will need six hard disks to satisfy this I/O rate and if you use hard disks<br />
with 10 krpm, eight disks will be required.<br />
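The disk-count estimate can be sketched as follows. Note that the per-disk I/O rates (about 160 IO/s at 15 krpm and 120 IO/s at 10 krpm) are assumptions of ours chosen to reproduce the figures above; actual rates vary by disk model:<br />

```python
# Disk-count estimate following the text: 0.6 store I/Os per user per second,
# about 2/3 reads and 1/3 writes, and each write costing two disk I/Os in
# RAID 10 (data plus mirror). Per-disk rates are our own assumption.
import math

def raid10_disks(users: int, per_disk_ios: int,
                 io_per_user: float = 0.6, write_share: float = 1 / 3) -> int:
    frontend = users * io_per_user                        # host-visible IO/s
    backend = frontend * (1 - write_share) + 2 * frontend * write_share
    return math.ceil(backend / per_disk_ios - 1e-9)       # tolerance for float rounding

print(raid10_disks(1200, per_disk_ios=160))  # 15 krpm disks
print(raid10_disks(1200, per_disk_ios=120))  # 10 krpm disks
```

Under these assumptions the sketch yields six 15 krpm disks or eight 10 krpm disks for 1,200 users, as stated above.<br />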
Provided the system is secured by a UPS, the controller and disk caches should be activated to improve<br />
performance.<br />
As an alternative to a SCSI-based disk subsystem it is also possible to use a Fibre-Channel-based SAN.<br />
Regardless of the disk subsystem used, the logical distribution of the hard disks should be analogous to the<br />
SCSI solution.<br />
To connect external backup drives either a PRIMERGY SX10 is used in conjunction with 5¼“ devices, or<br />
backup devices in 19“ format are used. On account of the data volume with 1,200 users either an LTO3 drive<br />
or a tape library that automatically changes the tapes should be used.<br />
For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see<br />
chapters Main memory and Operating system). With these options the virtual address space of 4 GB is<br />
distributed in favor of applications at a ratio of 3:1 (3 GB for the application, 1 GB for the kernel) instead of<br />
the standard 2:2 split. Microsoft recommends using the /3GB option from a physical memory configuration of<br />
1 GB upwards. For more details see Microsoft Knowledge Base Article Q823440.<br />
For 1,200 planned <strong>Exchange</strong> users and under the general conditions specified at the beginning of this<br />
chapter, a space requirement of up to 1200 users × 100 MB × 2 = 240 GB is needed for the <strong>Exchange</strong><br />
databases. However, an individual database should not be greater than 100 GB, as otherwise the restore<br />
time of a database can exceed four hours. To enable fast restore times, several small databases should<br />
therefore be preferred. As already explained in the chapters Transaction principle and Backup, <strong>Exchange</strong><br />
<strong>Server</strong> supports up to 20 databases, which are organized in so-called storage groups (SG) of at most five<br />
databases each. Whereas for versions prior to <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> the rule was to fill up individual<br />
storage groups before creating a further one on account of the administration overhead, for <strong>Exchange</strong><br />
<strong>Server</strong> <strong>2003</strong> it is recommended to open an additional storage group as soon as more than two databases<br />
are needed (see Microsoft Knowledge Base Article Q890699). Unless organizational reasons suggest a<br />
different distribution, two storage groups, each with two databases, would be used for 1,200 users.<br />
A disk space requirement of 26 GB over 7 days should be planned for the log files; this follows from the<br />
assumption made initially of a log file volume of 3 MB per user and day.<br />
PRIMERGY RX220<br />
Like the PRIMERGY RX200 S3, the PRIMERGY<br />
RX220 is an optimized computing node with two<br />
processors and one height unit (1U). In contrast to<br />
the PRIMERGY RX200 S3, which is based on the Intel<br />
Xeon architecture, the PRIMERGY RX220 is based<br />
on the AMD Opteron architecture. The PRIMERGY<br />
RX220 offers an onboard RAID-compliant SATA<br />
controller for the two internal hot-plug SATA hard<br />
disks and two Gbit LAN interfaces. Two PCI slots are<br />
available for further scaling. Moreover, a 2-channel<br />
RAID controller can be used, so that it can be ideally<br />
combined with one or two PRIMERGY SX30.<br />
Alternatively, the PRIMERGY RX220 can also be<br />
connected to a SAN through a Fibre-Channel<br />
controller.<br />
In a configuration with two AMD Opteron 280 processors and 3 GB of main memory the PRIMERGY RX220<br />
can efficiently manage up to 1,200 mail users with an appropriate configuration with a PRIMERGY SX30 or<br />
a SAN disk subsystem of an appropriate performance level.<br />
Processors 2 × Opteron DP 246 – 256,<br />
2.0 - 3.0 GHz, 1 MB SLC or<br />
2 × Opteron DP 265 – 280,<br />
1.8 - 2.4 GHz, 2 × 1 MB SLC<br />
Memory max. 28 GB<br />
Onboard RAID SATA<br />
PCI SCSI controller 1 × 1-channel<br />
1 × 2-channel RAID<br />
PCI FC controller 1 × 2-channel<br />
Disks internal 2 × SATA 3½”<br />
Disks DAS external max. 28 SCSI<br />
Onboard LAN 2 × 10/100/1000 Mbit<br />
[Diagram: RAID 1 – OS, AD | RAID 1 – Queues | RAID 1 – Logs | RAID 1+0 (8 disks) – Store]<br />
The two internal hard disks are mirrored (RAID 1) through the onboard SATA controller and used for the<br />
operating system and the Active Directory. Two of the external hard disks are intended as RAID 1 for queues<br />
and restore. The log files are stored on two external hard disks configured as a RAID 1. The remaining eight<br />
hard disks are put together to form a RAID 10 and house the <strong>Exchange</strong> databases (store). You should not<br />
be misled into believing that you can reduce the number of hard disks by using a RAID 5 instead of a<br />
RAID 10 or by using large-capacity hard disks. The number of hard disks is calculated from the anticipated<br />
I/O accesses per second. The efficiency of the individual hard disks and RAID levels was discussed in detail<br />
in chapter Hard disks. In this scenario for approximately 1,200 users this means an I/O rate of<br />
1200 users × 0.6 IO/user/s = 720 IO/s for the <strong>Exchange</strong> databases. If you take hard disks with 15 krpm as a<br />
basis, you will need six hard disks to satisfy this I/O rate and if you use hard disks with 10 krpm, eight disks<br />
will be required.<br />
Provided the system is secured by UPS, the controller and disk caches should be activated to improve<br />
performance.<br />
As an alternative to a SCSI-based disk subsystem it is also possible to use a Fibre-Channel-based SAN.<br />
Regardless of the disk subsystem used, the logical distribution of the hard disks should be analogous to the<br />
SCSI solution.<br />
To connect external backup drives either a PRIMERGY SX10 is used in conjunction with 5¼“ devices, or<br />
backup devices in 19“ format are used. On account of the data volume with 1,200 users either an LTO3 drive<br />
or a tape library that automatically changes the tapes should be used.<br />
For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see<br />
chapters Main memory and Operating system). With these options the virtual address space of 4 GB is<br />
distributed in favor of applications at a ratio of 3:1 (3 GB for the application, 1 GB for the kernel) instead of<br />
the standard 2:2 split. Microsoft recommends using the /3GB option from a physical memory configuration of<br />
1 GB upwards. For more details see Microsoft Knowledge Base Article Q823440.<br />
For 1,200 planned <strong>Exchange</strong> users and under the general conditions specified at the beginning of this<br />
chapter, a space requirement of up to 1200 users × 100 MB × 2 = 240 GB is needed for the <strong>Exchange</strong><br />
databases. However, an individual database should not be greater than 100 GB, as otherwise the restore<br />
time of a database can exceed four hours. To enable fast restore times, several small databases should<br />
therefore be preferred. As already explained in the chapters Transaction principle and Backup, <strong>Exchange</strong><br />
<strong>Server</strong> supports up to 20 databases, which are organized in so-called storage groups (SG) of at most five<br />
databases each. Whereas for versions prior to <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> the rule was to fill up individual<br />
storage groups before creating a further one on account of the administration overhead, for <strong>Exchange</strong><br />
<strong>Server</strong> <strong>2003</strong> it is recommended to open an additional storage group as soon as more than two databases<br />
are needed (see Microsoft Knowledge Base Article Q890699). Unless organizational reasons suggest a<br />
different distribution, two storage groups, each with two databases, would be used for 1,200 users.<br />
In accordance with the assumptions made initially with regard to the scope of the log data of 3 MB per user<br />
and day, a disk space requirement of 26 GB is needed for 7 days and 1,200 users.<br />
PRIMERGY RX300 S3 / TX300 S3<br />
Like the PRIMERGY RX200 S3, the PRIMERGY RX300 S3 is a rack-optimized<br />
dual-core, dual-processor system. However, with its height of two<br />
units (2U) it provides space for a larger number of hard disks and PCI controllers,<br />
thus enabling greater scaling with external disk subsystems.<br />
The PRIMERGY TX300 S3 offers a computing performance comparable to that of a<br />
PRIMERGY RX300 S3, but due to its larger housing it is also suitable for 5¼” drives,<br />
e.g. backup media.<br />
Both the PRIMERGY RX300 S3 and the PRIMERGY TX300 S3 can be equipped<br />
with two 2-channel RAID controllers, thus enabling the connection of up to four<br />
PRIMERGY SX30 with a total of 56 hard disks. It is also possible to connect a SAN<br />
as an external disk subsystem through up to six Fibre-Channel channels.<br />
RX300 S3<br />
Processors 2 × Xeon DP 5050/5060/5080<br />
3.0/3.2/3.73 GHz, 2 × 2 MB SLC or<br />
2 × Xeon DP 5110/5120/5130/5140/5150/5160<br />
1.66/1.86/2.0/2.33/2.66/3.0 GHz, 4 MB SLC<br />
Memory max. 32 GB<br />
Onboard RAID SAS, optional “MegaRaid ROMB” kit<br />
PCI RAID controller 2 × 2-channel SCSI<br />
PCI FC controller 3 × 2-channel<br />
Disks internal 6 × 3½" SAS/SATA<br />
Disks DAS external max. 56 SCSI<br />
Onboard LAN 2 × 10/100/1000 Mbit<br />
TX300 S3<br />
Processors 2 × Xeon DP 5110/5120/5130/5140/5150/5160<br />
1.66/1.86/2.0/2.33/2.66/3.0 GHz, 4 MB SLC<br />
Memory max. 32 GB<br />
Onboard RAID SAS, optional “MegaRaid ROMB” kit<br />
PCI RAID controller 1 × 8-port SAS<br />
2 × 2-channel SCSI<br />
PCI FC controller 3 × 2-channel<br />
Disks internal 6 × 3½" SAS/SATA, optional 8 × 3½"<br />
Disks external max. 56 SCSI<br />
Onboard LAN 2 × 10/100/1000 Mbit<br />
With two Xeon DP 51xx processors, 4 GB of main memory and a well-sized disk subsystem with fast<br />
15 krpm hard disks, the PRIMERGY RX300 S3 or PRIMERGY TX300 S3 can as a dedicated<br />
<strong>Exchange</strong> <strong>Server</strong> manage up to 3,000 <strong>Exchange</strong> users. The picture on the next page shows<br />
a solution with a SCSI-based disk subsystem with one 2-channel RAID controller and two PRIMERGY<br />
SX30. It goes without saying that a SAN can also be used instead of the SCSI-based disk subsystem;<br />
this makes no difference with regard to the number of hard disks and RAID arrays.<br />
With 3,000 <strong>Exchange</strong> users an I/O rate of 3000 users × 0.6 IO/user/s = 1800 IO/s is to be expected for the<br />
user profile described in the chapter <strong>Exchange</strong> measurement methodology. Typically, <strong>Exchange</strong> has 2/3 read<br />
and 1/3 write accesses, resulting for a RAID 10 in 2,400 IO/s that have to be processed by the hard disks.<br />
Providing this I/O rate calls for 16 hard disks with 15 krpm. RAID 5 should be dispensed with because, as<br />
already discussed in the chapter Hard disks, each hard disk and each RAID level only has a certain I/O<br />
performance. Using a RAID 5 would require 22 hard disks in comparison with only 16 hard disks with a<br />
RAID 10.<br />
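The back-end I/O comparison can be sketched as follows. The write penalties (2 for RAID 10, 4 for RAID 5) are the usual textbook values; the guide's exact disk counts additionally depend on the per-disk I/O rate assumed:<br />

```python
# Back-end disk I/O load for a given front-end rate and read share.
# A mirrored write costs 2 disk I/Os (RAID 10); a RAID 5 write typically costs
# 4 (read data + read parity + write data + write parity). Sketch only;
# helper names are ours.

def backend_iops(frontend: float, write_penalty: int, read_share: float = 2 / 3) -> float:
    reads = frontend * read_share
    writes = frontend * (1 - read_share)
    return reads + write_penalty * writes

print(backend_iops(1800, write_penalty=2))  # RAID 10
print(backend_iops(1800, write_penalty=4))  # RAID 5
```

For 1,800 front-end IO/s this gives 2,400 back-end IO/s with RAID 10 but 3,600 with RAID 5, which is why RAID 5 needs considerably more spindles here.<br />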
Under the general conditions defined at the beginning of the chapter, the space requirement for the mailbox<br />
contents of 3,000 <strong>Exchange</strong> users is calculated as 3000 users × 100 MB × 2 = 600 GB. With 16 hard<br />
disks in one or more RAID 10 arrays, hard disks with a capacity of at least 75 GB are required. Hard disks<br />
with 73 GB are thus somewhat too small for 3,000 users; in other words, hard disks with a capacity of<br />
146 GB and 15 krpm should be used for the databases (store). With 3 MB per user and day over seven days,<br />
the log data of 3,000 users amounts to 63 GB. For performance and data security reasons, however, a<br />
dedicated drive should be used per storage group for the log files. Since it is advisable to use four storage<br />
groups, this results in 63 GB / 4 ≈ 16 GB per group. It is therefore sufficient to use hard disks with the<br />
smallest available capacity of 36 GB.<br />
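These capacity checks can be sketched as follows (illustrative helpers; names are ours):<br />

```python
# Capacity checks from the text: 600 GB of store data spread over 16 disks in
# RAID 10 (half the disks hold data, half the mirrors), and the weekly log
# volume split across 4 storage groups. Sketch only; helper names are ours.

def min_disk_capacity_gb(total_gb: float, disks: int) -> float:
    """Minimum per-disk capacity: RAID 10 usable space is half the disk count."""
    return total_gb / (disks / 2)

def log_gb_per_sg(users: int, sgs: int, mb_per_user_day: int = 3, days: int = 7) -> float:
    """Log volume per storage group in GB (decimal)."""
    return users * mb_per_user_day * days / 1000 / sgs

print(min_disk_capacity_gb(600, 16))  # at least 75 GB per disk, hence 146 GB class
print(log_gb_per_sg(3000, 4))         # about 16 GB of logs per storage group
```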
An individual database should not be larger than 100 GB, as otherwise the restore time of a database can be<br />
more than four hours. Thus to enable fast restore times several small databases should be preferred. It is<br />
therefore advisable to use four storage groups each with two databases, unless there is an objection for<br />
other organizational reasons.<br />
For an <strong>Exchange</strong> <strong>Server</strong> of this size it is recommended to use dedicated hard disks per storage group for the<br />
database log files. This results in the following distribution of the disk subsystem:<br />
[Diagram: RAID 1 – OS | RAID 1 – Queues | RAID 1 – Restore | RAID 1 – Logs SG 1, 2, 3, 4 (one mirrored pair per storage group) | RAID 10 (4 disks each) – Store SG 1, 2, 3, 4]<br />
The internal six hard disks are run on the onboard controller of the PRIMERGY RX300 S3. Three RAID 1<br />
pairs are formed from the six hard disks, one pair for the operating system, one for the queues and a third<br />
one for restore. The PRIMERGY SX30 with their hard disks will contain the <strong>Exchange</strong> databases and logs.<br />
For each PRIMERGY SX30 we build two RAID 1 arrays consisting of two hard disks for the log files and two<br />
RAID 10 arrays made up of fast hard disks with 15 krpm for the databases.<br />
In an installation of this size the entire system or the data center should be secured by UPS so that the<br />
caches of the controllers and the hard disks can be activated without a loss of data occurring in case of a<br />
possible power failure.<br />
For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see<br />
chapters Main memory and Operating system), thus the virtual address space of 4 GB is distributed in favor<br />
of applications at a ratio of 3:1 – 3 GB for the application and 1 GB for the kernel. The standard distribution is<br />
2:2. Already from a physical memory configuration of 1 GB, Microsoft recommends using the /3GB option.<br />
For more details see Microsoft Knowledge Base Article Q823440.<br />
For an <strong>Exchange</strong> <strong>Server</strong> of this magnitude the Active Directory should be placed on a dedicated server.<br />
PRIMERGY BX600<br />
The PRIMERGY BX600 is a scalable 19" rack server system.<br />
With only seven height units (7U) it provides space for up to<br />
ten server blades as well as the entire infrastructure, such as<br />
gigabit LAN, Fibre-Channel and KVM (keyboard-video-mouse)<br />
switches and remote management.<br />
Alternatively, the PRIMERGY BX600 rack server system can<br />
be equipped with the PRIMERGY BX630 blade with AMD<br />
Opteron processors that can be scaled from two to eight<br />
processors or with the PRIMERGY BX620 S3 blade with Intel<br />
Xeon processors.<br />
PRIMERGY BX620 S3<br />
The PRIMERGY BX620 S3 is a server blade with<br />
two Intel Xeon processors. Its computing<br />
performance is comparable to that of a PRIMERGY<br />
RX300 S3. Each PRIMERGY BX620 S3 server<br />
blade offers an onboard SAS/SATA controller with<br />
RAID functionality, two hot-plug 2½" SAS/SATA<br />
hard disks and two gigabit-LAN interfaces.<br />
Optionally, the PRIMERGY BX620 S3 can be<br />
equipped with a 2-channel Fibre-Channel interface.<br />
PRIMERGY BX630<br />
The PRIMERGY BX630 is a server blade with two<br />
AMD Opteron processors. Its computing<br />
performance is comparable to that of a PRIMERGY<br />
RX220. The PRIMERGY BX630 offers two hot-plug<br />
3½" hard disks and can be equipped with either an<br />
SAS or SATA controller. Two gigabit-LAN interfaces<br />
are available onboard and it can be optionally<br />
equipped with a 2-channel Fibre-Channel interface.<br />
However, the special feature of the PRIMERGY<br />
BX630 server blade is its scalability. Thus it is<br />
possible to couple two PRIMERGY BX630 to form a<br />
4-processor system and four PRIMERGY BX630 to<br />
form an 8-processor system.<br />
Processors 2 × Xeon DP 5050/5060/5080/5063<br />
3.0/3.2/3.73/3.2 GHz, 2 × 2 MB SLC<br />
2 × Xeon DP 5110/5120/5130/5140<br />
1.6/1.8/2.0/2.3 GHz, 4 MB SLC<br />
Memory max. 32 GB<br />
Onboard LAN 2 × 10/100/1000 Mbit<br />
Onboard RAID SAS/SATA<br />
Disks internal 2 × SAS/SATA 2½”<br />
Fibre-Channel 2 × 2 Gbit<br />
Disks external depending on SAN disk subsystem<br />
Processors 2 × Opteron DP 246 – 256<br />
2.0 – 3.0 GHz, 1MB SLC, or<br />
2 × Opteron DP 265 – 285<br />
1.8 – 2.6 GHz, 2 × 1MB SLC, or<br />
2 × Opteron MP 865 – 885<br />
1.8 – 2.6 GHz, 2 × 1MB SLC<br />
Memory max. 16 GB<br />
Onboard LAN 2 × 10/100/1000 Mbit<br />
Onboard RAID SAS/SATA<br />
Disks internal 2 × SAS/SATA 3½”<br />
Fibre-Channel 2 × 2 Gbit<br />
Disks external depending on SAN disk subsystem<br />
The computing performance of a PRIMERGY BX630 equipped with two AMD Opteron 2xx dual-core<br />
processors and of a PRIMERGY BX620 S3 equipped with two Xeon 51xx processors is very similar; used as<br />
dedicated <strong>Exchange</strong> <strong>Server</strong>s with a well-sized SAN disk subsystem, both server blades can by all means<br />
manage up to 3,000 <strong>Exchange</strong> users. The diagram shows a FibreCAT CX500 as an example, but of course<br />
every other SAN disk subsystem of an adequate performance level can be used.<br />
[Diagram: SAN disk layout for 3,000 users – RAID 1 arrays for OS, queues, restore and the log files of storage groups 1–4; four RAID 10 arrays of four 15 krpm hard disks each for the stores of storage groups 1–4]<br />
With 3,000 <strong>Exchange</strong> users an I/O rate of 0.6 IO/user/s × 3000 users = 1800 IO/s is to be expected for the<br />
user profile described in the chapter <strong>Exchange</strong> measurement methodology. Typically, <strong>Exchange</strong> has 2/3 read<br />
and 1/3 write accesses, which for a RAID 10 results in 2,400 IO/s that have to be processed by the hard disks.<br />
Providing this I/O rate calls for 16 hard disks with 15 krpm. RAID 5 should be dispensed with because, as<br />
already discussed in the chapter Hard disks, each hard disk and each RAID level only has a certain I/O<br />
performance: a RAID 5 would require 22 hard disks, compared with only 16 hard disks for a<br />
RAID 10.<br />
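The calculation above can be sketched as a small helper. The figure of roughly 150 I/Os per second for a 15 krpm hard disk is an assumption taken for illustration here; the exact per-disk ratings are discussed in the chapter Hard disks.

```python
import math

def raid10_disks_for_iops(users, io_per_user=0.6, read_share=2/3, disk_iops=150):
    """Estimate how many hard disks a RAID 10 array needs to sustain
    the Exchange I/O load of a given number of mailbox users.

    RAID 10 mirrors every write, so each logical write costs two
    physical I/Os while each read costs one.
    """
    logical = users * io_per_user          # 3000 users -> 1800 IO/s
    reads = logical * read_share           # 1200 IO/s
    writes = logical - reads               # 600 IO/s
    physical = reads + 2 * writes          # 2400 IO/s on the spindles
    return math.ceil(physical / disk_iops)

print(raid10_disks_for_iops(3000))  # 16 disks with 15 krpm
```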
The space requirement for the mailbox contents of 3,000 <strong>Exchange</strong> users is calculated – under the general<br />
conditions defined at the beginning of the chapter – to be 3000 users × 100 MB × 2 = 600 GB. With 16 hard<br />
disks in one or more RAID 10 arrays, hard disks with a capacity of at least 75 GB are required. Hard disks<br />
with 73 GB are thus slightly too small for 3,000 users; hard disks with a capacity of<br />
146 GB and 15 krpm should therefore be used for the databases (store).<br />
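The capacity side of the sizing can be sketched the same way; the function name and the decimal MB-to-GB conversion are illustrative assumptions.

```python
import math

def min_disk_capacity_gb(users, mailbox_mb=100, overhead_factor=2, raid10_disks=16):
    """Smallest per-disk capacity (GB) so that a RAID 10 array of
    raid10_disks hard disks holds the databases of all users.

    RAID 10 halves the raw capacity, so only raid10_disks / 2 disks
    contribute usable space.
    """
    store_gb = users * mailbox_mb * overhead_factor / 1000  # 600 GB
    usable_disks = raid10_disks // 2                        # 8 disks
    return math.ceil(store_gb / usable_disks)

print(min_disk_capacity_gb(3000))  # 75 -> 73 GB disks are too small, use 146 GB
```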
An individual database should not be larger than 100 GB, as otherwise the restore time of a database can be<br />
more than four hours. Thus to enable fast restore times several small databases should be preferred. It is<br />
therefore advisable to use four storage groups each with two databases, unless there is an objection for<br />
other organizational reasons. For an <strong>Exchange</strong> <strong>Server</strong> of this size it is recommended to use dedicated hard<br />
disks per storage group for the database log files. With 3 MB per user and day over seven days this results<br />
for the log data in a data volume of 63 GB for 3,000 users. Since it is advisable to use four storage groups,<br />
this results in 63 GB / 4 ≈ 16 GB. It is therefore sufficient to use hard disks with the smallest available<br />
capacity of 36 GB.<br />
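The log sizing follows the same pattern and can be checked with a short sketch; the names are illustrative.

```python
import math

def log_volume_gb(users, mb_per_user_day=3, days=7, storage_groups=4):
    """Log space (GB) needed per storage-group log volume for a
    seven-day stock of transaction logs."""
    total_gb = users * mb_per_user_day * days / 1000  # 63 GB for 3000 users
    return math.ceil(total_gb / storage_groups)

print(log_volume_gb(3000))  # 16 -> the smallest 36 GB disks are sufficient
```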
The two internal hard disks are run as RAID 1 on the onboard SAS or SATA controller of the PRIMERGY<br />
BX620 S3 or PRIMERGY BX630 and are used to house the operating system. In the SAN two RAID 1 arrays<br />
are built for queues and restore, moreover four RAID 1 arrays are used for the log files and four RAID 10<br />
arrays made up of fast hard disks with 15 krpm for the databases.<br />
In an installation of this size the entire system or the data center should be secured by UPS so that the<br />
caches of the controllers and the hard disks can be activated without a loss of data occurring in case of a<br />
possible power failure.<br />
For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see<br />
chapters Main memory and Operating system), thus the virtual address space of 4 GB is distributed in favor<br />
of applications at a ratio of 3:1 – 3 GB for the application and 1 GB for the kernel. The standard distribution is<br />
2:2. From a physical memory configuration of 1 GB upwards, Microsoft recommends using the /3GB option.<br />
For more details see Microsoft Knowledge Base Article Q823440.<br />
For an <strong>Exchange</strong> <strong>Server</strong> of this magnitude the Active Directory should be placed on a dedicated server.<br />
If you combine two PRIMERGY BX630 server blades to form a 4-processor system, between 5,000 and<br />
6,000 users can then be managed with a configuration of four AMD Opteron 8xx dual-core processors, 4 GB<br />
of main memory and an adequate disk subsystem.<br />
[Diagram: SAN disk layout for 5,000 to 6,000 users – RAID 1 arrays for OS, queues, restore and the log files of storage groups 1–4; four RAID 10 arrays of six 15 krpm hard disks each for the stores of storage groups 1–4]<br />
The main difference from the disk subsystem and configuration shown on the previous page for 3,000<br />
users is the larger number of hard disks for the <strong>Exchange</strong> databases (store). As many as 24 hard<br />
disks with 15 krpm are necessary to manage the higher I/O rate induced by the additional 2,000 to 3,000 users. It<br />
goes without saying that the hard disks for queues, restore and log files must also be adapted to the higher<br />
capacity requirements.<br />
As already described, the PRIMERGY BX630 can be scaled up to an 8-processor system through the<br />
combination of four PRIMERGY BX630 server blades. However, this concentrated computing performance<br />
cannot be put to full use by <strong>Exchange</strong> as a mere 32-bit application that does not use PAE.<br />
There is, however, no scaling beyond the 5,000 to 6,000 users that are already managed with a 4-processor system.<br />
Instead, to scale <strong>Exchange</strong> above 5,000 users, a scale-out scenario should be used, with<br />
additional <strong>Exchange</strong> <strong>Server</strong>s installed on 2- or 4-processor systems. See also chapter <strong>Exchange</strong><br />
architecture.<br />
PRIMERGY RX600 S3 / TX600 S3<br />
With four processors and a large potential for<br />
storage and PCI controllers the PRIMERGY<br />
RX600 S3 and PRIMERGY TX600 S3 represent<br />
a basis for major application servers. The<br />
PRIMERGY RX600 S3 and PRIMERGY<br />
TX600 S3 only differ as far as the form of the<br />
housing is concerned. The PRIMERGY<br />
RX600 S3 as a pure computing unit has been<br />
optimized for rack installation with four height<br />
units (4U), whereas the PRIMERGY TX600 S3<br />
uses six height units (6U). In return, however,<br />
the PRIMERGY TX600 S3 offers space for five<br />
further hard disks and accessible 5¼" devices.<br />
The PRIMERGY TX600 S3 is also available as a<br />
floor-stand version.<br />
Although these servers can include up to 64 GB<br />
of memory, only a configuration of 4 GB is<br />
practical for <strong>Exchange</strong> as a pure 32-bit<br />
application (see chapter Main memory). This<br />
also sets boundaries to the scaling of an<br />
<strong>Exchange</strong> <strong>Server</strong> – more than between 5,000<br />
and 6,000 users cannot be sensibly run on an<br />
individual <strong>Exchange</strong> <strong>Server</strong>.<br />
Due to the memory limitation a higher number of<br />
users will result particularly in an increased<br />
access to the <strong>Exchange</strong> databases. Thus the<br />
additional load with 6,000 instead of 5,000 users<br />
fully affects the disk subsystem, because in the<br />
absence of main memory the <strong>Exchange</strong> <strong>Server</strong><br />
cannot adequately buffer the database<br />
accesses. If you want to run such a large<br />
monolithic <strong>Exchange</strong> <strong>Server</strong>, you should select<br />
an efficient and generously sized Fibre-Channel-based disk subsystem that can uncouple load<br />
peaks with a cache on the disk subsystem side.<br />
Up to 5,000 users it is by all means still possible<br />
to use an SCSI-based Direct Attached Storage<br />
(DAS) subsystem, as shown on the next page. A<br />
configuration with a Fibre-Channel-based disk<br />
subsystem for 5,000 users was already<br />
illustrated in the previous chapter PRIMERGY<br />
BX600.<br />
RX600 S3<br />
Processors 4 × Xeon MP<br />
3.16/3.66 GHz, 1 MB SLC, or<br />
4 × Xeon MP 7020/7030<br />
2.67/2.80 GHz, 2 × 1 MB SLC, or<br />
4 × Xeon MP 7041<br />
3.0 GHz, 2 x 2 MB SLC<br />
Memory max. 64 GB<br />
Onboard RAID 2-channel SCSI<br />
PCI RAID controller up to 2 × 2-channel SCSI<br />
PCI FC controller up to 4 × 2-channel<br />
Disks internal max. 5<br />
Disks DAS external max. 56<br />
Onboard LAN 2 × 10/100/1000 Mbit<br />
TX600 S3<br />
Processors 4 × Xeon MP<br />
3.16/3.66 GHz, 1 MB SLC, or<br />
4 × Xeon MP 7020/7030<br />
2.67/2.80 GHz, 2 x 1 MB SLC, or<br />
4 × Xeon MP 7041<br />
3.0 GHz, 2 x 2 MB SLC<br />
Memory max. 64 GB<br />
Onboard RAID 2-channel SCSI<br />
PCI RAID controller 2 × 2-channel SCSI<br />
PCI FC controller up to 4 × 2-channel<br />
Disks internal max. 10<br />
Disks DAS external max. 56<br />
Onboard LAN 2 × 10/100/1000 Mbit<br />
For 5,000 users the PRIMERGY RX600 S3 or PRIMERGY TX600 S3 should be equipped with four Xeon<br />
7041 processors and 4 GB of main memory. An I/O rate of 5000 users × 0.6 IO/user/s = 3000 IO/s is to be<br />
expected for the user profile described in the chapter <strong>Exchange</strong> measurement methodology. Typically,<br />
<strong>Exchange</strong> has 2/3 read and 1/3 write accesses, which for a RAID 10 results in 4,000 IO/s that have to be<br />
processed by the hard disks. Providing this I/O rate calls for 24 hard disks with 15 krpm. RAID 5 should be<br />
dispensed with because, as already discussed in the chapter Hard disks, each hard disk and each RAID<br />
level only has a certain I/O performance: a RAID 5 would require 36 hard disks, compared with<br />
only 24 hard disks for a RAID 10.<br />
Under the general conditions defined at the beginning of the chapter, the space requirement for the mailbox<br />
contents of 5,000 <strong>Exchange</strong> users is calculated to be 5000 users × 100 MB × 2 = 1 TB. With 24 hard disks in<br />
one or more RAID 10 arrays hard disks with a capacity of at least 84 GB are required. Thus, hard disks with<br />
a capacity of 146 GB should be used for the databases (store).<br />
An individual database should not be larger than 100 GB, as otherwise the restore time of a database can be<br />
more than four hours. Thus to enable fast restore times several small databases should be preferred. It is<br />
therefore advisable, unless there is an objection for other organizational reasons, to use four storage groups<br />
each with three databases.<br />
With 5,000 users planning must be made for about 105 GB for log files, providing the assumption made at<br />
the beginning of the chapter of 3 MB per user and day for a 7-day stock of log files is taken as the basis. For<br />
performance and data security reasons it is advisable to set up a separate drive for the log files for each<br />
storage group. For four storage groups 105 GB / 4 ≈ 27 GB are needed per log file volume for this purpose. It<br />
is therefore sufficient to use hard disks with a capacity of 36 GB.<br />
For an <strong>Exchange</strong> <strong>Server</strong> of this size it is recommended to use dedicated hard disks per storage group for the<br />
database log files. This results in the following distribution of the disk subsystem:<br />
[Diagram: internal disk layout – RAID 1 arrays for OS and SMTP queues, an unmirrored restore disk]<br />
The internal hard disks of the PRIMERGY RX600 S3 are run on the onboard SAS controller. A RAID 1 array<br />
is used for the operating system and a second RAID 1 array for the SMTP queues. The fifth hard disk is<br />
planned as temporary disk space for restore; as it holds only temporary data, it need not be<br />
mirrored.<br />
The two 2-channel RAID controllers drive the four 1-channel PRIMERGY SX30s, whose hard disks<br />
contain the <strong>Exchange</strong> databases and logs. For each PRIMERGY SX30 a RAID 1 array is set up for the log<br />
files and a RAID 10 array made up of fast 15 krpm hard disks for the databases.<br />
[Diagram continued: four RAID 10 arrays of six 15 krpm hard disks each for the stores]<br />
For an <strong>Exchange</strong> <strong>Server</strong> of this magnitude dedicated servers should be used for the Active Directory so that<br />
in this system no hard disks have to be provided for the database of the Active Directory.<br />
An <strong>Exchange</strong> <strong>Server</strong> of this dimension should be protected by a UPS against power failures. Then the<br />
controller and hard disk caches can also be activated without hesitation to improve performance.<br />
For optimal usage of the main memory the BOOT.INI options /3GB and /USERVA should be used (see<br />
chapters Main memory and Operating system), thus the virtual address space of 4 GB is distributed in favor<br />
of applications at a ratio of 3:1 – 3 GB for the application and 1 GB for the kernel. The standard distribution is<br />
2:2. Since <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> requires approximately 1.8 times as much virtual address space as<br />
physical memory, <strong>Exchange</strong> would – with the standard division of 2 GB for applications – not be able to use<br />
the full physical memory of 2 GB, but only 1.1 GB. Comparison measurements with and without /3GB have<br />
shown a gain in performance here of 28%. For more details see Microsoft Knowledge Base Article Q823440.<br />
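The arithmetic behind the 1.1 GB figure can be sketched as follows; the 1.8 ratio is the one stated above, and the function name is illustrative.

```python
# Exchange Server 2003 needs roughly 1.8 bytes of virtual address
# space per byte of physical memory it can exploit (ratio from this guide).
VA_RATIO = 1.8

def usable_physical_gb(user_va_gb):
    """Physical memory Exchange can actually use for a given
    user-mode virtual address space (GB)."""
    return round(user_va_gb / VA_RATIO, 1)

print(usable_physical_gb(2))  # 1.1 GB with the default 2:2 split
print(usable_physical_gb(3))  # 1.7 GB with /3GB
```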
A feature of this system's chipset is that it fully reserves the address area between 3 and 4 GB for the addressing<br />
of controllers. To make the physical memory underlying this address area usable, the chipset can remap it into<br />
the address area between 4 and 5 GB. Thus, with a configuration of more than 3 GB of physical memory, PAE should -<br />
contrary to the recommendations elsewhere - be activated to enable the memory area above 4 GB to be<br />
addressed.<br />
See Microsoft Knowledge Base Article Q815372 for more information about the optimization of large<br />
<strong>Exchange</strong> <strong>Server</strong>s.<br />
PRIMERGY RX800 S2<br />
The PRIMERGY RX800 S2 is the flagship of the<br />
PRIMERGY family. The PRIMERGY RX800 S2 can be<br />
scaled from a 4-processor to a 16-processor system in<br />
increments of four processors. The maximum<br />
configuration provides 256 GB of main memory.<br />
Unfortunately <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> with its limitations<br />
as regards memory usage and the high dependence on<br />
the disk I/O does not make full use of the possible<br />
computing performance of a PRIMERGY RX800 S2.<br />
Therefore, the PRIMERGY RX800 S2 is hardly used in<br />
the field of stand-alone <strong>Exchange</strong> <strong>Server</strong>s.<br />
Processors 4 - 16 × Xeon MP 7020<br />
2.67 GHz, 2 × 1 MB SLC, or<br />
4 - 16 × Xeon MP 7040<br />
3.0 GHz, 2 × 2 MB SLC<br />
Memory max. 256 GB<br />
Onboard RAID SAS<br />
PCI RAID controller 16 × 2-channel SCSI<br />
PCI FC controller up to 6 × 1-channel<br />
Disks internal 6 × SAS<br />
Onboard LAN 2 × 10/100/1000 Mbit per unit<br />
However, the PRIMERGY RX800 S2 is an adequate system for setting up high-availability high-end<br />
<strong>Exchange</strong> clusters on the basis of Windows <strong>Server</strong> <strong>2003</strong> Datacenter Edition. Under Windows <strong>Server</strong> <strong>2003</strong><br />
Datacenter Edition it is possible to run up to eight PRIMERGY RX800 S2 together in a cluster, with six or<br />
seven servers typically being run actively and one or two servers passively so that these can stand in if one<br />
of the active servers in the cluster fails. A high-availability <strong>Exchange</strong> cluster, which can manage up to about<br />
35,000 (7 × 5,000) active users, can be realized on this model.<br />
PRIMERGY RXI300 / RXI600<br />
The PRIMERGY RXI300 and PRIMERGY RXI600 are<br />
based on Intel's state-of-the-art 64-bit platform with<br />
Itanium®2 processor architecture. However, since<br />
<strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> has until now not been available<br />
in a 64-bit version, an <strong>Exchange</strong> <strong>Server</strong> is not possible<br />
on the basis of a PRIMERGY RXI300 or PRIMERGY<br />
RXI600.<br />
Processors 2 or 4 × Itanium2<br />
1.5 GHz, 4 MB TLC<br />
1.6 GHz, 9 MB TLC<br />
Memory 16 or 32 GB<br />
Summary<br />
If we summarize the results discussed above once more in a clearly arranged way, the outcome is the<br />
diagram opposite. (Note: the horizontal axis is not proportionate.) You can immediately see that many of the<br />
PRIMERGY systems overlap or are even on a par as regards their efficiency concerning <strong>Exchange</strong>. A<br />
well-equipped PRIMERGY RX300 S3 can serve just as many users as a smaller configuration of a<br />
PRIMERGY RX600 S3. There are no strict limits because - as already discussed in depth at the outset - the<br />
real number of users depends on the customer-specific load profile. In the chart this fact is depicted by the<br />
gradient of the bars.<br />
[Chart: supported <strong>Exchange</strong> user ranges per PRIMERGY model - from the Economy Solutions (Econel 100, Econel 200, RX100 S3) through Tower Solutions (TX150 S4, TX200 S3, TX300 S3, TX600 S3), Rack Solutions (RX220, RX200 S3, RX300 S3, RX600 S3), Blade Solutions (BX620 S3, BX630 dual, BX630 quad) up to Cluster Solutions (RX300 S3, RX600 S3, BX600 and RX800 clusters); horizontal axis from 50 to 35,000 users]<br />
Which system is ultimately the most suitable depends on the customer requirements: for example, is a<br />
floor-stand or a rack required, is the backup to be performed internally or externally, does the customer have<br />
growth potential and require expandability, etc.<br />
Irrespective of the performance of the hardware, maximum scaling in the scale-up scenario is reached at<br />
approximately 5,000 active users per server as a result of limitations in the <strong>Exchange</strong> software. A larger<br />
number of users can be achieved in a scale-out scenario through additional servers. In this case, it is<br />
sensible to use clustered solutions because these also provide redundancy against hardware failure. In this<br />
way, clusters for up to approximately 35,000 users can be set up.<br />
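The cluster arithmetic above can be sketched as follows; the function name and parameters are illustrative assumptions based on the active/passive layout described for the PRIMERGY RX800 S2.

```python
def cluster_capacity(nodes, passive=1, users_per_server=5000):
    """Active/passive Exchange cluster: only the active nodes serve
    mailboxes; the passive nodes stand in if an active node fails."""
    return (nodes - passive) * users_per_server

print(cluster_capacity(8))  # 35000 users in an 8-node cluster with one passive node
```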
However, when sizing an <strong>Exchange</strong> server it is by all means necessary to analyze the customer<br />
requirements and the existing infrastructure, such as network, Active Directory, etc. These must then be<br />
prudently incorporated in the planning of the <strong>Exchange</strong> server or the <strong>Exchange</strong> environment. A number of<br />
influences, such as backup data volume, can be calculated directly, other influences can only be estimated<br />
or weighed up on the basis of empirical values. Thus when configuring a medium-sized to large-scale<br />
<strong>Exchange</strong> server, detailed planning and consulting are an absolute necessity.<br />
At this point it must be expressly pointed out once again that all the data used in this paper was determined<br />
on the basis of the medium user profile and was not tuned for the maximum possible value. All<br />
measurements were based on RAID systems protected against hardware failure, and on all systems<br />
adequate computing performance was also available for active virus protection and backup. Our competitors<br />
very frequently quote benchmark results for the efficiency of their systems; however, these are generally<br />
based on insecure RAID 0 arrays and leave no room for virus protection, server management or the like.<br />
References<br />
[L1] General information about <strong>Fujitsu</strong> products<br />
http://ts.fujitsu.com<br />
[L2] General information about the PRIMERGY product family<br />
http://ts.fujitsu.com/primergy<br />
[L3] PRIMERGY Benchmarks – Performance Reports and <strong>Sizing</strong> <strong>Guide</strong>s<br />
http://ts.fujitsu.com/products/standard_servers/primergy_bov.html<br />
[L4] Performance Report – iSCSI and iSCSI Boot<br />
http://docs.ts.fujitsu.com/dl.aspx?id=108eca7c-2412-4e59-99dd-3f96721f4127<br />
[L5] Performance Report – Modular RAID<br />
http://docs.ts.fujitsu.com/dl.aspx?id=8f6d5779-2405-4cdd-8268-1f948ba050e6<br />
[L6] Microsoft <strong>Exchange</strong> <strong>Server</strong><br />
www.microsoft.com/exchange/default.mspx<br />
[L7] What's new in <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong><br />
www.microsoft.com/downloads/details.aspx?FamilyID=84236bd9-ac54-4113-b037c04a96a977fd&DisplayLang=en<br />
[L8] Performance Benchmarks for Computers Running <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong><br />
http://technet.microsoft.com/en-us/library/cc507123.aspx<br />
[L9] Downloads for <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong><br />
http://technet.microsoft.com/en-us/exchange/bb288488.aspx<br />
[L10] <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> Technical Documentation Library<br />
www.microsoft.com/technet/prodtechnol/exchange/2003/library/default.mspx<br />
[L11] <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> Performance and Scalability <strong>Guide</strong><br />
www.microsoft.com/technet/prodtechnol/exchange/2003/library/perfscalguide.mspx<br />
[L12] Troubleshooting <strong>Exchange</strong> <strong>Server</strong> <strong>2003</strong> Performance<br />
http://www.microsoft.com/technet/prodtechnol/exchange/2003/library/e2k3perf.mspx<br />
[L13] <strong>Exchange</strong> 2000 <strong>Server</strong> Resource Kit<br />
http://www.microsoft.com/technet/prodtechnol/exchange/2000/library/reskit/default.mspx<br />
[L14] System Center Capacity Planner 2006<br />
www.microsoft.com/windowsserversystem/systemcenter/sccp/default.mspx<br />
[L15] Microsoft Operations Manager<br />
www.microsoft.com/mom/default.mspx<br />
[L16] Windows Small Business <strong>Server</strong> <strong>2003</strong><br />
www.microsoft.com/windowsserver2003/sbs/default.mspx<br />
[L17] Windows <strong>Server</strong> <strong>2003</strong> Active Directory<br />
www.microsoft.com/windowsserver2003/technologies/directory/activedirectory/default.mspx<br />
[L18] Physical Address Extension - PAE Memory and Windows<br />
www.microsoft.com/whdc/system/platform/server/PAE/PAEdrv.mspx<br />
[L19] Microsoft TechNet<br />
http://www.microsoft.com/technet/<br />
Document History<br />
Version Date of publication <strong>Exchange</strong> Version<br />
Version 1.0 March 1997 4.0<br />
Version 2.0 July 1999 5.5<br />
Version 3.0 February 2002 2000<br />
Version 3.1 September 2002 2000<br />
Version 4.0 February 2004 <strong>2003</strong><br />
Version 4.1 July 2006 <strong>2003</strong><br />
Version 4.2 July 2006 <strong>2003</strong><br />
Contacts<br />
PRIMERGY Performance and Benchmarks<br />
mailto:primergy.benchmark@ts.fujitsu.com<br />
PRIMERGY Product Marketing<br />
mailto:Primergy-PM@ts.fujitsu.com<br />
Delivery subject to availability, specifications subject to change without<br />
notice, correction of errors and omissions excepted.<br />
All conditions quoted (TCs) are recommended cost prices in EURO excl. VAT<br />
(unless stated otherwise in the text). All hardware and software names used<br />
are brand names and/or trademarks of their respective holders.<br />
Copyright © <strong>Fujitsu</strong> Technology Solutions GmbH 2009<br />
Published by department:<br />
Enterprise Products<br />
PRIMERGY <strong>Server</strong><br />
PRIMERGY Performance Lab<br />
Internet:<br />
http://ts.fujitsu.com/primergy<br />
Extranet:<br />
http://partners.ts.fujitsu.com/com/products/serv<br />
ers/primergy<br />