
to create high, spiking, random loads. We would sustain these loads for up to 48 hours. If none of the systems crashed, none of the network bridges lost connectivity to one or both networks, no data were lost, and if no applications terminated abnormally, then that was a passing test result.

Figure 5: Maximum network configuration for the operating system
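
Pass/fail criteria like these lend themselves to automation. Below is a minimal Python sketch of such a spiking soak test; `apply_load` and `systems_healthy` are hypothetical placeholders standing in for the project's real load drivers and health checks, which are not shown here.

```python
import random
import time

def apply_load(level):
    """Placeholder: push the configured transaction mix at `level` (0.0-1.0 of max)."""
    pass

def systems_healthy():
    """Placeholder: return False if any system crashed, a bridge dropped a network,
    data were lost, or an application terminated abnormally."""
    return True

def spiking_soak_test(duration_hours=48, step_seconds=60):
    """Sustain a high, spiking, random load for the full duration and report pass/fail."""
    deadline = time.time() + duration_hours * 3600
    while time.time() < deadline:
        # Random spikes: mostly high load, with occasional bursts to maximum.
        level = 1.0 if random.random() < 0.2 else random.uniform(0.6, 0.9)
        apply_load(level)
        if not systems_healthy():
            return False          # any failure criterion ends the test
        time.sleep(step_seconds)
    return True                   # survived the full soak: passing result

if __name__ == "__main__":
    print("PASS" if spiking_soak_test() else "FAIL")
```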

Lesson 3: Test the Tests - and the Models

Okay, so now it’s clear that realism of the environment and the load are important. I mentioned earlier that performance, load, and reliability testing are complex endeavors, and that means that it’s easy to screw them up. These two areas of realism are especially easy to screw up.

So, if realism of environment and load is important, how can we be sure we got it right, before we go live or ship? The key here is to build a model or simulation of your system’s performance and use the model or simulation both to evaluate the performance of the system during the design phase and to evaluate the tests during test execution. In turn, since the model and simulation are equally subject to error, the tests also can evaluate the models. Furthermore, a model or simulation which is validated through testing is useful for predicting future performance and for doing various kinds of performance “what-iffing” without having to resort to expensive performance tests as frequently.

There are a couple of key tips to keep in mind regarding modeling and simulating. First, spreadsheets are good for initial performance, load, and reliability modeling and design. Second, once these spreadsheets are in place, you can use them as design documents both for the actual system and for a dynamic simulation. The dynamic simulations allow for more “what-if” scenarios and are useful during design, implementation, and testing.
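
To illustrate the spreadsheet-style modeling described above, here is a minimal Python sketch of a capacity model used for what-if questions. Every number in it (arrival rates, service times, server counts) is invented for illustration; a real model would be populated from the design documents.

```python
# Spreadsheet-style capacity model: each row is a transaction type with an
# arrival rate per device per hour and a CPU service time in seconds.
# All figures are illustrative, not taken from any real project.
MODEL = {
    # transaction: (arrivals per device per hour, CPU seconds per transaction)
    "connect":   (2.0, 0.001),
    "authorize": (2.0, 0.020),
    "fetch":     (2.0, 0.125),
}

def predicted_cpu_idle(devices, servers=2, cpus_per_server=4):
    """Predict CPU idle (0-1) for a given device population and server pool."""
    busy = sum(rate * devices * service for rate, service in MODEL.values())
    capacity = servers * cpus_per_server * 3600.0   # CPU-seconds per hour
    return max(0.0, 1.0 - busy / capacity)

# "What-if" scenarios: how does idle headroom change as the fleet grows?
for devices in (25_000, 40_000, 50_000):
    print(f"{devices:>6} devices -> predicted idle {predicted_cpu_idle(devices):.0%}")
```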

Here’s a case study of using models and simulations properly during the system test execution period. On an Internet appliance project, we needed to test server-side performance and reliability at projected loads of 25,000, 40,000, and 50,000 devices in the field. In terms of realism, we used a production environment and a realistic load generator, which we built using the development team’s unit testing harnesses. The test environment, including the load generators and functional and beta test devices, is shown in Figure 6. Allowing some amount of functional and beta testing of the devices during certain carefully selected periods of the performance, load, and reliability tests gave the test team and the beta users a realistic sense of user experience under load.

Figure 6: Internet appliance test environment
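
One way to turn a unit testing harness into a load generator, roughly in the spirit described above, is to wrap the harness’s device-session routine in a pool of concurrent workers. The sketch below assumes a hypothetical `device_session` function standing in for such a harness routine; it is not the project's actual implementation.

```python
import concurrent.futures
import random
import time

def device_session(device_id):
    """Placeholder for one simulated device's session, e.g. a routine
    borrowed from a development team's unit testing harness."""
    time.sleep(random.uniform(0.01, 0.05))   # stand-in for real protocol work

def generate_load(devices=25_000, concurrency=200):
    """Drive `devices` simulated device sessions through a worker pool."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(device_session, d) for d in range(devices)]
        for f in concurrent.futures.as_completed(futures):
            f.result()   # re-raise any session failure so the run fails loudly

if __name__ == "__main__":
    generate_load()
```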

So far, this is all pretty standard performance, load, and reliability testing procedure. However, here’s where it gets interesting. Because a dynamic simulation had been built for design and implementation work, we were able to compare our test results against the simulation results, at both coarse-grained and fine-grained levels of detail.

Let’s look first at the coarse-grained level of detail, using an extract from our performance, load, and reliability test results presentation. We had simulation data for the mail servers, which were named IMAP1 and IMAP2 in Figure 6. The simulation predicted 55% server CPU idle at 25,000 devices. Table 1 shows the CPU idle statistics under worst-case load profiles, with 25,000 devices. We can see that the test results support the simulation results at 25,000 devices.

Server   CPU Idle
IMAP1    59%
IMAP2    55%

Table 1: Server CPU idle figures during performance tests
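
This coarse-grained comparison can be made repeatable by checking each server’s observed idle figure against the simulation’s prediction. A minimal sketch using the Table 1 figures follows; the 5-point tolerance is an assumption, not something stated in the article.

```python
PREDICTED_IDLE = 55.0   # % CPU idle the simulation predicted at 25,000 devices

# Observed CPU idle per server during the worst-case load profile (Table 1).
observed_idle = {"IMAP1": 59.0, "IMAP2": 55.0}

def supports_prediction(observed, predicted, tolerance=5.0):
    """The test supports the model if no server fell more than `tolerance`
    percentage points below the predicted idle headroom."""
    return all(idle >= predicted - tolerance for idle in observed.values())

print(supports_prediction(observed_idle, PREDICTED_IDLE))   # True for Table 1
```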

Table 2 shows our more fine-grained analysis for the various transactions the IMAP servers had to handle, their frequency per connection between the device and the IMAP server, the average time the simulation predicted for the transaction to complete, and the average time that was observed during testing. The times are given in milliseconds.

Transaction   Frequency       Simulation   Testing
Connect       1 per connect   0.74         0.77
Banner        1 per connect   2.57         13.19
Authorize     1 per connect   19.68        19.08
Select        1 per connect   139.87       163.91
Fetch         1 per connect   125.44       5,885.04

Table 2: Server transactions and times, simulated and observed during testing
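
The fine-grained comparison can be automated the same way: compute the ratio of observed to simulated time for each transaction and flag anything that deviates by more than some factor. A minimal sketch using the Table 2 figures; the 2x threshold is an assumption, not from the article.

```python
# (simulated ms, observed ms) per transaction, from Table 2
TRANSACTIONS = {
    "Connect":   (0.74,   0.77),
    "Banner":    (2.57,   13.19),
    "Authorize": (19.68,  19.08),
    "Select":    (139.87, 163.91),
    "Fetch":     (125.44, 5885.04),
}

def flag_deviations(transactions, factor=2.0):
    """Return transactions whose observed time exceeds the simulated time
    by more than `factor`, i.e. where the test and the model disagree."""
    return {name: observed / simulated
            for name, (simulated, observed) in transactions.items()
            if observed > factor * simulated}

print(flag_deviations(TRANSACTIONS))
# {'Banner': 5.13..., 'Fetch': 46.9...} - the Fetch discrepancy stands out
```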

