
tially desired test script shown in the picture below.

Figure 6 – Final Test Script

4. Final Considerations

During the development of our work, we observed several constraints that could affect the full utilization of our approach. We also identified several enhancements and research opportunities for developing the described approach further and making it a real alternative to existing performance testing techniques. These are some of the constraints and opportunities identified:

• Limited feature model representation possibilities. During the domain analysis performed to support the creation of the derivation models, some details had to be left out in order to make the tool easier and faster to understand when creating new performance test script instances. On the other hand, some features, such as simulating user abandonment in load tests, multiple threads or multiple virtual user groups, could not be modelled in a way that would produce a correct derivation, restricting their use to the manual customization task performed in the last step of our approach (the first sketch after this list illustrates this kind of manual customization). Future enhancements of our work may successfully model these features.


• Reuse between applications and test types. During the development of our research, we detected that the reuse of test scripts may occur among different types of performance tests and also among different applications. Regarding performance test types, our work concluded that different performance, load or stress test scripts vary mostly with regard to configuration parameters, but generally present a similar structure (the second sketch after this list illustrates this parameter-only variation). Besides that, we observed that the test scripts of different applications share several common elements, allowing a greater reuse of element definitions.

• Approach usage with other non-functional test types. We also identified that the defined approach could be used for deriving test scripts for other non-functional requirements, such as security, reliability, etc. With regard to security, for example, it is possible to apply our approach, even including JMeter, to simulate scenarios that involve resource exhaustion in order to identify possible security failures related to this condition.

• Eliminating customizations and adaptations through approach restructuring. A research extension aimed at identifying alternatives is to look into possibilities of eliminating the task of adapting the generated script for correct execution. This would contribute to the removal of the recording operation for transactions that are not included in the derivation tool, which would make the script generation a lot easier.

• Automatic model derivation through test scripts. This consists of defining a way of adapting the existing annotations feature, already used by GenArch, to enable an automatic model derivation from a group of test scripts. This could reduce the amount of work that currently needs to be done in script analysis and model creation, which in our research were done entirely manually.
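As an illustration of the manual customization mentioned in the first item above, the fragment below is a minimal sketch of how a second virtual user group might be added by hand to the derived JMX file. The element and property names follow JMeter's standard JMX format, but the group names and values are purely illustrative and are not produced by the derivation tool; the loop controller and sampler children that a complete plan requires are omitted for brevity.

<!-- First virtual user group: part of the derived script -->
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup"
             testname="Registered Users" enabled="true">
  <stringProp name="ThreadGroup.num_threads">50</stringProp>
  <stringProp name="ThreadGroup.ramp_time">60</stringProp>
  <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
</ThreadGroup>
<hashTree/>
<!-- Second virtual user group: added manually in the customization step -->
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup"
             testname="Anonymous Visitors" enabled="true">
  <stringProp name="ThreadGroup.num_threads">200</stringProp>
  <stringProp name="ThreadGroup.ramp_time">120</stringProp>
  <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
</ThreadGroup>
<hashTree/>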
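To make the observation about parameter-only variation between performance, load and stress scripts concrete, the second sketch shows how a derived thread group could read its settings from JMeter properties via the __P function, so that one script can serve all three test types depending on the values passed at run time (for example, jmeter -n -t plan.jmx -Jusers=200 -Jrampup=0 -Jduration=3600 for a stress run). The property names users, rampup and duration are our own illustrative choices, not part of the derivation models.

<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup"
             testname="Derived Scenario" enabled="true">
  <!-- number of virtual users; defaults to 10 when no property is given -->
  <stringProp name="ThreadGroup.num_threads">${__P(users,10)}</stringProp>
  <!-- ramp-up time in seconds; 0 produces the sudden arrival typical of a stress test -->
  <stringProp name="ThreadGroup.ramp_time">${__P(rampup,60)}</stringProp>
  <!-- run for a fixed duration (in seconds) instead of a fixed loop count -->
  <boolProp name="ThreadGroup.scheduler">true</boolProp>
  <stringProp name="ThreadGroup.duration">${__P(duration,600)}</stringProp>
</ThreadGroup>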

In our work we defined an approach for the derivation of performance test scripts for web applications, which is composed of six steps. Its main goal is to derive new test script instances, but it also aims to provide an easy way of adapting these test scripts to other test types and scenarios.

In this article, the approach developed during a software engineering master's course completed at the C.E.S.A.R. (Recife Center for Advanced Studies and Systems) institute was described in some detail. For further information regarding the results achieved and/or any other questions, please contact one of the authors.

Biography

José Carréra, MSc, has been a test engineer at C.E.S.A.R. (Recife Center for Advanced Studies and Systems) since 2006 and Professor of Computer Science at the Faculdade de Tecnologia de Pernambuco, Brazil, since 2010. He holds a master's degree in software engineering, is a graduate in computer science, and is a Certified Tester – Foundation Level (CTFL), certified through the ISTQB (International Software Testing Qualifications Board). His research interests include performance testing, exploratory testing, agile methodologies and software quality in general.

Uirá Kulesza is an Associate Professor at the Computer Science Department (DIMAp), Federal University of Rio Grande do Norte (UFRN), Brazil. He obtained his PhD in Computer Science at PUC-Rio, Brazil (2007), in cooperation with the University of Waterloo and Lancaster University. His main research interests include aspect-oriented development, software product lines, and the design/implementation of model-driven generative tools. He has co-authored over 70 refereed papers in international conferences, journals and books. He worked as a research member of the AMPLE project (2007-2009) - Aspect-Oriented Model-Driven Product Line Engineering (www.ample-project.net).
