As you can see, we had good matches for connect and authorize transactions, a reasonable match for select transactions, a significant mismatch for banner transactions, and a huge discrepancy for fetch transactions. Analysis of these test results led to the discovery of significant problems with the simulation model's load profiles and of significant defects in the mail server configuration.

Based on this case study, here comes another key tip: Be careful to design your tests and models so that you can compare the results between the two. Notice that the comparisons I just showed you rely on being able to do that.

Lesson 4: Invest in Tools, but Don't Splurge

Almost always, tools are necessary for performance, load, and reliability testing. Trying to run these tests manually is not an industry best practice, to put it mildly. When you think of performance testing tools, one or two probably come to mind, accompanied by visions of large price tags. However, there are many types of suitable tools: some commercial, some open-source, and some custom-developed. For complex test situations, you may well need a combination of two or three of these types.

So here's the first key tip: Rather than immediately starting with the assumption that you'll need to buy a particular tool, follow best practices in test tool selection:

1. Create a high-level design of your performance, load, and reliability test system, including the test cases, the test data, and the automation strategy.
2. Identify the specific tool requirements and constraints for your test system.
3. Assess the tool options to create a short-list of tools.
4. Hold a set of competitive demos between the various vendors and with the open-source tools.
5. Do a pilot project with the demonstration winner.

Only after assessing the results of the pilot should you make a large, long-term investment in a particular tool.
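To make the "design for comparability" tip concrete, here is a minimal sketch, in Python, of how one might compare a simulation model's predicted transaction mix against measured rates and flag the kinds of mismatches described above. The transaction names come from the case study; the rates, the tolerance, and the `compare_profiles` helper are invented for illustration.

```python
def compare_profiles(modeled, measured, tolerance=0.10):
    """For each transaction type, compute the relative deviation of the
    modeled rate from the measured rate, and flag deviations that
    exceed the tolerance as mismatches."""
    report = {}
    for txn in measured:
        deviation = abs(modeled[txn] - measured[txn]) / measured[txn]
        report[txn] = (deviation, deviation > tolerance)
    return report

if __name__ == "__main__":
    # Invented transactions-per-minute figures for illustration only.
    modeled  = {"connect": 100, "authorize": 95, "select": 80, "banner": 40, "fetch": 10}
    measured = {"connect": 98,  "authorize": 97, "select": 70, "banner": 90, "fetch": 60}
    for txn, (dev, mismatch) in compare_profiles(modeled, measured).items():
        print(f"{txn:10s} deviation={dev:.0%} {'MISMATCH' if mismatch else 'ok'}")
```

The point is structural, not the arithmetic: unless the model and the test both report results per transaction type, in the same units, a comparison like this is impossible.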
An obvious but easy-to-forget point is that open-source tools don't have marketing and advertising budgets, so they won't come looking for you. Instead, search the Internet, including sites like sourceforge.net, to find open-source tools. While on this note of open-source, free tools, here's a key warning: Don't overload on free tools. Even free tools have costs of ownership, and risks. One risk is that everyone on your test team will use a different performance, load, and reliability testing tool, resulting in a Tower of Babel and significant turnover costs. This happened to us on the interactive voice response server project, when we couldn't re-use scripts created by the telephony subsystem vendor's developers because they wouldn't coordinate with us on the scripting language to be used.

I want to reinforce this issue of making a tool selection mistake with a case study. It's one client's experience, again from before they became my client, but it has been repeated many, many times. One of my clients attended a software testing conference, which is a good idea. They talked to some of the tool vendors at the conference, which is also a good idea. However, they selected a commercial tool based solely on a salesperson's representations that it would work for them, which, of course, is a bad idea. When they got back to their offices, they realized they couldn't use the tool, because no one on the team had the required skills. So, they called the tool vendor's salesperson. Based on his recommendation, they hired a consultant to automate all their tests for them. That's not necessarily a bad idea, but they neglected to put in place any transition plan to train staff to maintain the tests as they were completed. Furthermore, they tried to automate at an unstable interface, which in this case was the graphical user interface. So, after six months, the consultant left. The test team soon found that they could not maintain the tests.
They also found that they could not interpret the results, and false positives mounted as the graphical user interface evolved. After a short period, the use of the tool was abandoned, after a total investment of about $500,000.

Here's a successful case study of test tool selection and implementation. While working on a complex system with a wide-area network of interactive voice response servers connected to a large call center, we carefully designed our test system, then selected our tools using the process mentioned earlier. We used the following tools:

• QA Partner (now Silk Test) to drive the call center applications through the graphical user interface.
• Custom-built load generators to drive the interactive voice response server applications.
• Custom-built middleware to tie the two sides together and coordinate the testing end-to-end.

We used building blocks such as the Tcl scripting language for our custom-built test components. The overall test system architecture is shown in Figure 7. The total cost of the entire system, including the labor to build it, was under $100,000, and it provided a complete end-to-end test capability, including performance, load, and reliability testing.

Figure 7: Good test system architecture

The Magazine for Professional Testers 73
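To illustrate what a custom-built load generator involves, here is a minimal sketch. The article's generators were custom Tcl components driving interactive voice response servers; this sketch is in Python, and the `transact` function is a stand-in stub (the real component would place a call or issue a protocol request), so the structure runs anywhere. The user counts, durations, and think times are invented for illustration.

```python
import random
import threading
import time

def transact():
    """Stand-in for one transaction against the system under test."""
    time.sleep(random.uniform(0.001, 0.005))  # simulated server latency

def worker(duration_s, think_time_s, latencies, lock):
    """Issue transactions until the test window closes, recording each latency."""
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        start = time.monotonic()
        transact()
        elapsed = time.monotonic() - start
        with lock:
            latencies.append(elapsed)
        time.sleep(think_time_s)  # model user think time between transactions

def run_load(users=5, duration_s=1.0, think_time_s=0.01):
    """Drive the system with `users` concurrent virtual users; return latencies."""
    latencies, lock = [], threading.Lock()
    threads = [
        threading.Thread(target=worker, args=(duration_s, think_time_s, latencies, lock))
        for _ in range(users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

if __name__ == "__main__":
    lat = run_load()
    print(f"{len(lat)} transactions, avg latency {sum(lat) / len(lat) * 1000:.1f} ms")
```

Even a skeleton like this shows why such generators stay cheap: the hard part is the protocol-specific `transact` implementation and the coordination middleware, not the load loop itself.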