FEATURE ARTICLE

TestArchitect™: Automation Toolset
by Hans Buwalda

TestArchitect™ is the name we have given to our automation toolset. It reflects our vision that automated testing requires a well-designed architectural plan that allows technical and non-technical elements to work together fluidly. It also addresses the persistent missing link in test automation tools: how to design tests. In TestArchitect, test design sits in the center column, flanked by the test automation execution engine and test management.


What is often misunderstood is that automated testing is not the same as automating manual testing. For automated testing to be successful, tests need to be designed with their automation already in mind. In my experience, this simple missing factor is the source of many test automation failures.

To tackle automated testing we have come up with a vision that consists of three elements:

• an effective keyword-driven method for test design and automation called Action Based Testing
• a toolset to support it, TestArchitect, that has all the features commonly found in test tools, including a shared repository organization and flexible web-based "dashboards" for managing test projects. However, this article will focus only on the Action Based Testing aspects
• a set of guidelines and best practices that can be found at our website, www.logigear.com/hans/

Do you need TestArchitect to do Action Based Testing? Not necessarily. In fact, a simple interpreter written in the scripting language of a test playback tool can do the job quite well. The outline of such an interpreter would be:

    repeat
        read a test line
        look up the keyword
        execute the function mapped to the keyword
        write results in the result file
    until no more test lines

TestArchitect, however, was developed to facilitate Action Based Testing test development. It organizes the test modules and the actions, and has tools such as a test editor to quickly create the action-based tests. Also popular is the feature of "action definitions", where existing actions can be used to create new ones without additional programming.

Action Based Testing began as a concept devised 16 years ago for a complex yet critical project, and it is now widely used in software testing. The tests are clustered in "test modules" and are described as a series of "actions", varying from inputting a value and clicking a button to capturing and verifying a value. The actions are named with "action keywords" or "action words" and have arguments for the input and expected output values. Part of a test module is also a set of "test objectives" that detail the goals of the test. Action Based Testing contains many directions on how to determine the test modules and the test objectives, but that falls beyond the scope of this article.

A key concept in Action Based Testing is that the automation efforts don't focus on the tests but solely on the actions, as named by their action words. In most projects a relatively small set of actions can be used to describe a large set of tests. The result is that the tests are readable for humans and at the same time fully automated. The automation is very maintainable, since in the case of a change in the system under test generally only the implementation of one or more actions has to be modified; most of the tests can be left alone.
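To make the interpreter outline above concrete, here is a minimal sketch in Python of such a keyword interpreter. The test-line format (a CSV file with a keyword in the first cell), the action names and the helper functions are hypothetical illustrations of the pattern, not TestArchitect internals.

    import csv

    # Simple shared state used by the sketch actions below.
    value_store = {}

    # Hypothetical action implementations; each takes the remaining cells
    # of a test line as arguments and returns True on success.
    def enter_value(field, value):
        value_store[field] = value
        return True

    def check_value(field, expected):
        return value_store.get(field) == expected

    # Map action keywords to the functions that implement them.
    ACTIONS = {
        "enter": enter_value,
        "check": check_value,
    }

    def run_test_module(path, result_path):
        """Read a test module (CSV: keyword, arg1, arg2, ...) and run each line."""
        with open(path, newline="") as tests, open(result_path, "w") as results:
            for line in csv.reader(tests):
                if not line or line[0].startswith("#"):
                    continue  # skip blank lines and comments
                keyword, args = line[0].strip().lower(), line[1:]
                action = ACTIONS.get(keyword)
                if action is None:
                    results.write(f"UNKNOWN ACTION: {keyword}\n")
                    continue
                passed = action(*args)
                results.write(f"{'PASS' if passed else 'FAIL'}: {keyword} {args}\n")

A test module file for this sketch might contain lines such as "enter,customer name,John Smith" followed by "check,customer name,John Smith"; the point is only that the tests remain readable data while the automation lives in the action functions.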


An added feature allows actions to be shared across test projects by having a project "subscribe" to another "supplier" project. The subscriber can then freely use all actions of the supplier project, and any project the supplier itself subscribes to is automatically available to the subscribing project as well. Subscriptions can be recursive and mutual: a supplier can subscribe to yet another project, in which case those actions become available to its subscribers too, and a supplier project can even subscribe to one or more of its own subscribers.

The underlying principle in Action Based Testing is to design tests explicitly rather than rely on recording them. However, for practical reasons TestArchitect does come with a recording tool called the "Action Recorder." Instead of generating scripts, this tool produces action lines that can be inserted anywhere in a test module or action definition.

Another important aspect of TestArchitect is test result management. After a test run, results are initially stored locally. The user must take an explicit action to store them in the repository, where they become visible to others and can affect metrics and reports. This keeps the repository from becoming cluttered with the results of the numerous ad hoc dry runs users perform during testing or automation development. Once in the repository, results are further organized with proper naming or storage placement. Finally, results are linked to the test modules and test cases they have tested.

Automation in TestArchitect is organized as a set of libraries that work completely separately from the test management and test design components, allowing you to program the actions in virtually any programming language. It is also possible to execute the tests in a third-party playback tool, using its scripting language to program the actions.

All in all, TestArchitect is an ecosystem for testers to design, automate, and manage tests, grounded in its core design of Action Based Testing. Without this method, I believe it would be just another playback tool like its counterparts. It is my firm belief that TestArchitect serves to improve the quality and implementation of tests. This, to me, is at the very core of our primary objectives as testers: to write great tests that find meaningful bugs and help improve overall application quality.


RELATED ARTICLES

Test design focused on expediting functional test automation
By David W. Johnson

Test organizations continue to undergo rapid transformation as demands grow for testing efficiencies. Functional test automation is often seen as a way to increase the overall efficiency of functional and system tests. How can a test organization stage itself for functional test automation before an investment in test automation has even been made? Further, how can you continue to harvest the returns from your test design paradigm once the test automation investment has been made? In this article we will discuss the factors in selecting a test design paradigm that expedites functional test automation. We will recommend a test design paradigm and illustrate how it could be applied to both commercial and open-source automation solutions. Finally, we will discuss how to leverage the appropriate test design paradigm once automation has been implemented, in both an agile (adaptive) and a waterfall (predictive) system development lifecycle (SDLC).

Test design – selection criteria

The test design selection criteria should be grounded in the fundamental goals of any functional automation initiative. Let us assume the selected test automation tool will enable end users to author, maintain and execute automated test cases in a web-enabled, shareable environment. Furthermore, the test automation tool shall support test case design, automation and execution "best practices" as defined by the test organization. To harvest the maximum return from both test design and test automation, the test design paradigm must support:

• Manual test case design, execution and reporting
• Automated test case design, execution and reporting
• Data-driven manual and automated test cases
• Reuse of test case "steps" or "components"
• Efficient maintenance of manual and automated test cases

Test design – recommended paradigm

One paradigm that has been gaining momentum under several guises in the last few years is keyword-based test design. I have stated in previous articles that:

"The keyword concept is founded on the premise that the discrete functional business events that make up any application can be described using a short text description (keyword) and associated parameter-value pairs (arguments). By designing keywords to describe discrete functional business events, the testers begin to build up a common library of keywords that can be used to create keyword test cases. This is really a process of creating a language (keywords) to describe a sequence of events within the application (test case)."

The keyword concept is not a silver bullet, but it does present a design medium that leads to both effective test case design and ease of automation. Keywords present the opportunity to design test cases in a fashion that supports our previous test design selection criteria. It does not guarantee that these test cases will be effective, but it certainly presents the greatest opportunity for success. Leveraging a test design paradigm that is modular and reusable paves the road for long-term automation; not only that, it moves most of the maintenance to a higher level of abstraction: the keyword. The keyword name should be a shorthand description of what actions the keyword performs. The keyword name should begin with the action being


performed, followed by the functional entity, followed by descriptive text (if required). Here are several common examples:

• Logon User
• Enter Customer Name
• Enter Customer Address
• Validate Customer Name
• Select Customer Record

Test design – keyword application

Keyword test case design begins as an itemized list of the test cases to be constructed, usually as a set of named test cases. The internal structure of each test case is then constructed using existing or new keywords. Once the design is complete, the appropriate test data (inputs and expected results) can be added. Testing the keyword test case design involves executing the test case against the application or applications being tested.

At first glance this does not appear to be any different from any other method of test case design, but there are significant differences between keyword test case design and any freehand/textual approach. Keyword test case designs are:

• Consistent – the same keyword is used to describe the business event every time
• Data driven – the keyword contains the data required to perform the test step
• Self documenting – the keyword description contains the designer's intent
• Maintainable – with consistency comes maintainability
• Automation-ready – they support automation with little or no design transformation (rewrite)

Test design – adaptation based on development/testing paradigm

There are two primary development and testing approaches in use by development organizations today: adaptive (agile) and predictive (waterfall/cascade). Both approaches certainly have their proponents, though adaptive (agile) system development lifecycles are increasingly gaining precedence. The question becomes: how does this affect the test design paradigm? The answer appears to be that it does not really affect the test design paradigm itself, but it does affect the timing.

Predictive (waterfall/cascade) development lifecycles can be supported by a straightforward design, build, execute and maintain test design paradigm that may later support automation. Eventually, one would expect the predictive testing team to design, build, execute, maintain and automate their test case inventory. This could be accomplished using both Tier 1 commercial automation tools and open-source automation tools. As long as the automation tools support modular design (functions) and data-driven testing (test data sources), keyword-based automation can be supported; the most significant difference is the time and effort required to implement the testing framework. Adaptive (agile) development lifecycles come in several flavors; some support immediate keyword-based functional test design and automation while others do not. Agile test-driven development (TDD) using FitNesse, a testing framework which requires instrumentation by and collaboration with the development team, certainly supports keyword-based test case design and automation. Other agile paradigms only support instrumentation at the unit test level, or not at all, i.e. a separate keyword-based test case design and automation toolset must be used. The challenge for non-TDD agile becomes designing, building, executing and maintaining functional tests within the context of a two- to four-week sprint. The solution is a combination of technique and timing. For the immediate changes in the current sprint, consider using exploratory testers and an itemized list of test cases with little, if any, content: basically a high-level checklist.
Once the software for a sprint has migrated to and existed in production for at least one sprint, a traditional set of regression test cases can be constructed using keywords. This separates the challenge into sprint-related testing and regression testing.
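As a concrete illustration of the keyword properties listed above (consistency, data-driven design, reuse of steps), here is a minimal Python sketch of a keyword test case expressed as data. The keyword names follow the article's "action + functional entity" naming convention, but the functions, the data table and the driver loop are hypothetical, not the API of any particular tool.

    # Keyword implementations: one small function per discrete business event.
    customers = {}

    def logon_user(user, password):
        return user == "admin" and password == "secret"  # stubbed check

    def enter_customer_name(name):
        customers["current"] = {"name": name}
        return True

    def validate_customer_name(expected):
        return customers.get("current", {}).get("name") == expected

    KEYWORDS = {
        "Logon User": logon_user,
        "Enter Customer Name": enter_customer_name,
        "Validate Customer Name": validate_customer_name,
    }

    # A keyword test case is plain data: (keyword, arguments) per step.
    # The same steps are reused for every data row, giving a data-driven test.
    test_case = [
        ("Logon User", ("admin", "secret")),
        ("Enter Customer Name", ("{name}",)),
        ("Validate Customer Name", ("{name}",)),
    ]

    test_data = [{"name": "John Smith"}, {"name": "Jane Doe"}]

    for row in test_data:
        for keyword, args in test_case:
            bound = tuple(a.format(**row) if isinstance(a, str) else a for a in args)
            ok = KEYWORDS[keyword](*bound)
            print(f"{'PASS' if ok else 'FAIL'}: {keyword} {bound}")

Because each business event is always described by the same keyword, a change to how customer names are entered is made once in the keyword implementation, while the test cases themselves stay untouched.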


KEY PRINCIPLES OF TEST DESIGN
By Hans Buwalda

Test design is the single biggest contributor to success in software testing. Not only can good test design result in good coverage, it is also a major contributor to efficiency. The principle of test design should be "lean and mean": the tests should be of a manageable size and, at the same time, complete and aggressive enough to find bugs before a system or system update is released.

Test design is also a major factor for success in test automation. This is not that intuitive. Like many others, I initially thought that successful automation is a matter of good programming or even of "buying the right tool." Finding that test design is the main driving force for automation success is something I had to learn over the years, often the hard way.

What I have found is that there are three main goals that need to be achieved in test design. I like to characterize them as the "Three Holy Grails of Test Design", a metaphor based on the stories of King Arthur and the Round Table: the three goals are difficult to reach, mimicking the struggle King Arthur's


knights experienced in search of the Holy Grail. This article will introduce the three "grails" to look for in test design. In subsequent articles of this series, I will go into more detail about each of the goals.

The terminology in this article and the three subsequent articles is based on Action Based Testing (ABT), LogiGear's method for testing and test automation. You can read more about the ABT methodology on the LogiGear web site. In ABT, test cases are organized into spreadsheets which are called "test modules." Within the test modules the tests are described as sequences of "test lines," each starting in the A column with an "action" while the other columns contain arguments. The automation in ABT does not focus on automating test cases, but on automating individual actions, which can be reused as often as necessary.

The Three Goals for Test Design

The three most important goals for test design are:

1. Effective breakdown of the tests
The first step is to break down the tests into manageable pieces, which in ABT we call "test modules." At this point in the process we are not yet describing test cases; we simply place each test case in its appropriate "chapter." A breakdown is good if each of the resulting test modules has a clearly defined and well-focused scope which is differentiated from the other modules. The scope of a test module subsequently determines what its test cases should look like.

2. Right approach per test module
Once the breakdown is done, each individual test module becomes a mini-project. Based on the scope of a test module we need to determine what approach to take to develop the test module. By approach, I mean the choice of testing techniques used to build the test cases (like boundary analysis, decision tables, etc.) and who should get involved to create and/or assess the tests. For example, a test module aimed at testing the premium calculation of insurance policies might need the involvement of an actuarial department.

3. Right level of test specification
This third goal is where you can win or lose most of the maintainability of automated tests. When creating a test case, try to specify only the high-level details that are relevant for the test. For example, from the end-user perspective "login" or "change customer phone number" is one action; it is not necessary to specify low-level details such as clicks and inputs. These low-level details should be "hidden" at this stage in separate, reusable automation functions common to all tests. This makes a test more concise and readable, but most of all it helps maintain the test, since low-level details that are left out do not have to be changed one by one in every single test if the underlying system changes. The low-level details can then be re-specified (or have their automation revised) only once and reused many times in all tests.

In ABT this third principle is visible in the "level" of the actions used in a test module. For example, in a test of an insurance company database we would write tests using only "high-level" actions like "create policy" and "check premium," while in a test of a dialog we could use a "low-level" action like "click" to see whether the OK button can be clicked.

Regardless of the method you choose, simply spending some time thinking about good test design before writing the first test case will have a very high payback down the line, both in the quality and in the efficiency of the tests.
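To illustrate the third goal, here is a minimal Python sketch of how a high-level action such as "create policy" might be implemented once on top of low-level UI steps. The action names, the `UiDriver` helper and its methods are hypothetical stand-ins used only to show the layering, not ABT or TestArchitect APIs.

    class UiDriver:
        """Hypothetical low-level UI helper; in a real project this would wrap
        whatever playback tool or UI library drives the application."""
        def click(self, control):
            print(f"click {control}")

        def enter(self, field, value):
            print(f"enter {value!r} into {field}")

        def read(self, field):
            return "125.00"  # stubbed value for the sketch


    ui = UiDriver()

    # High-level actions: each hides the clicks and inputs behind one name,
    # so a UI change is fixed here once instead of in every test.
    def create_policy(customer, product):
        ui.click("New Policy")
        ui.enter("Customer", customer)
        ui.enter("Product", product)
        ui.click("OK")

    def check_premium(expected):
        actual = ui.read("Premium")
        assert actual == expected, f"premium {actual} != {expected}"

    # A test line such as:  create policy | John Smith | Home Insurance
    # then maps to a single high-level call:
    create_policy("John Smith", "Home Insurance")
    check_premium("125.00")

The test modules only ever mention "create policy" and "check premium"; the clicks and field names live in one place, which is where the maintainability gain of the third goal comes from.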


SPOTLIGHT INTERVIEW

Elfriede Dustin of Innovative Defense Technologies is the author of various books on testing, including Automated Software Testing, Quality Web Systems, and her latest book, Effective Software Testing.

LogiGear: With test design being an important ingredient of successful test automation, we at LogiGear Magazine are curious how you approach the problem. What methods do you employ to ensure that your teams are developing tests that are ready for, and supported by, automation?

Dustin: How we approach test design depends entirely on the testing problem we are solving. For example, we support the testing of mission-critical systems using our (IDT's) Automated Test and Re-Test (ATRT) solution. For some of the mission-critical systems, Graphical User Interface (GUI) automated testing is important, because the system under test (SUT) has operator-intensive GUI inputs that absolutely need to work; we approach the test design based on the GUI requirements, GUI scenarios, expected behavior and scenario-based expected results.

Other mission-critical systems we are testing have a need for interface testing, where hundreds or thousands of messages are being sent back and forth between the interfaces, and the message data being sent back and forth has to be simulated (here we have to test load, endurance and functionality). We might have to look at detailed software requirements for message values, data inputs and expected results, and/or examine the SUT's Software Design Documents (SDDs) for such data, so we can ensure the tests we are designing, or even the tests that already exist, are sufficient to permit validation of those messages against the requirements. Finally, it is of little use if we have now automated the testing of thousands of messages but have added to the workload of the analysis. Therefore, the message data analysis needs to be automated as well, and again a different test design is required here.

Many more test design approaches exist and need to be applied depending on the many more testing problems we are trying to solve, such as testing web services, databases, and others.

ATRT provides a modeling component for any type of test, i.e. GUI, message-based, a combination of GUI and message-based, and more. This modeling component provides a more intuitive view of the test flow than mere text from a test plan or a test procedure document. The model view lends itself to producing designs that are more likely to ensure the automated test meets the requirements the testers set out to validate, and it reduces rework or redesign once the test is automated.


LogiGear: What are your feelings on the different methods of test design, from keyword test design and behavior-driven test development (the Cucumber tool) to scripted test automation of manual test case narratives?

Dustin: We view keyword-driven or behavior-driven tests as automated testing approaches, not necessarily test design approaches. We generally try to separate the nomenclature of test design approaches from that of automated testing approaches. Our ATRT solution provides a keyword-driven approach, i.e. the tester (automator) selects the keyword action required via an ATRT-provided icon and the automation code is generated behind the scenes. We find the keyword/action-driven approach to be most effective in helping test automators, and in our case ATRT users, build effective tests with ease and speed.

LogiGear: How have you dealt with the problems of scaling automation? Do you strive for a certain percentage of all tests to be automated? If so, what is that percentage and how often have you attained it?

Dustin: Again, the answer to the question of scaling automation depends on whether we are talking about scaling automated testing for message-based/background testing or scaling automation for GUI-based automated testing. Scaling for message-based/background automated testing seems to be an easier problem to solve than scaling for GUI-based automated testing.

We don't necessarily strive for a certain percentage of tests to be automated. We actually have a matrix of criteria that allows us to choose the tests that lend themselves to automation versus those tests that don't. We also build an automated test strategy before we propose automating any tests. The strategy begins with achieving an understanding of the test regimen and test objectives, and then prioritizes what to automate based on the likely return on investment. Although everything a human tester does can, with enough time and resources, be automated, not all tests should be automated. It has to make sense; there has to be some measurable return on the investment of automation before we recommend applying it. "Will automating this increase the probability of finding defects?" is a question that's rarely asked when it comes to automated testing. In our experience, there are ample tests that are ripe for achieving significant returns through automation, so we are hardly hurting our business opportunities when we tell a customer that a certain test is not a good candidate for automation. Conversely, for those who have tried automation in their test programs and who claim their tests are 100% automated, my first question is "What code coverage percentage does your 100% test automation achieve?" One hundred percent test automation does not mean 100% code coverage or 100% functional coverage. One of the most immediate returns on using automation is the expansion of test coverage without increasing the amount of test time (and often while actually reducing test time), especially when it comes to automated backend/message testing.

LogiGear: What is your feeling on the current state of test automation tools? Do you feel that they are adequately addressing the needs of testers, both those who can script and those who cannot?

Dustin: Most of the current crop of test automation tools focus on Windows or web development. Most of the existing tools must be installed on the SUT, thus modifying the SUT configuration. Most of the tools are GUI-technology dependent. In our test environments we have to work with various operating systems in various states of development, using various types of programming languages and GUI technologies. Since none of the tools currently available on the market met our vast criteria for an automated testing tool, we at IDT developed our own tool solution, mentioned earlier: ATRT. ATRT is GUI-technology neutral, doesn't need to be installed on the SUT, is OS independent and uses keyword actions. No software development experience is required. ATRT provides the tester with every GUI action a mouse provides, in addition to various other keywords. Our experience with customers using ATRT for the first time is that it provides a very intuitive user interface and that it is easy


IN BRIEF…

AUTOMATION TOOLS

What is Automated Test and Retest (ATRT)?
IDT's ATRT Test Manager addresses the complex testing challenges of mission-critical systems by providing an innovative technical solution that solves the unique testing problems associated with this domain. It provides an integrated solution that can be applied across the entire testing lifecycle.

Highlights of ATRT:
• Non-intrusive to the System Under Test (SUT) – the SUT configuration remains intact
• Cross-platform and cross-OS compatible – web, client-server, handheld; Linux, Windows, Solaris
• GUI-technology neutral – independent of any GUI controls (custom or non-custom)
• Scriptless automated testing – software development not required
• Data driven – allows the same test scenarios to be run repeatedly with different data sets, e.g. boundary testing, equivalence partitioning
• Model-driven automated testing – allows test flows to be generated via a basic point-and-click model interface

Other features:
• Combining the power of GUI and non-GUI automated testing: ATRT Test Manager provides GUI and message-based testing capability for systems under test.
• Distributing concurrent testing over a network: automated tests can be executed concurrently over a network for test cases where various GUI or message-based outputs are network dependent or must run in parallel.
• Batch processing: for test cases requiring simultaneous execution as a batch job, e.g. endurance or longevity testing.

SOFTWARE TESTING BOOKS

Integrated Test Design and Automation: Using the TestFrame Method

Editorial review:
Authors: Hans Buwalda, Dennis Janssen, Iris Pinkster, and Paul Watters
Paperback: 224 pages
Publisher: Addison-Wesley Professional, 1st edition (28 December 2001)
Language: English
Product dimensions: 9.1 x 7.3 x 0.6 inches

This practical guide enables readers to understand and apply the TestFrame method, an open method developed by the authors and their colleagues which is rapidly becoming a standard in the testing industry. With the aid of this book, readers will learn how to customize the TestFrame method for their organizations, develop reusable testing standards, make optimum use of automated testing tools, and reuse and maintain test products. IT managers will learn how to improve control of the test process and assess results, and expert testers will learn effective ways of automating test execution in a structured way.

Zero-defect software is the holy grail of all development projects, and sophisticated techniques have now emerged to automate the testing process so that high-quality software can be delivered on time and on budget. This practical guide enables readers to understand and apply the TestFrame method.

REVIEW: "The Test Automation Bible", March 21, 2002 – by David Edwards
This exceptionally well-written volume on automated testing should be read by every software engineer and project manager, not just the QA team. Hans Buwalda, world expert on the TestFrame method, has written an excellent overview which is contemporary and explains the Greek metaphor very clearly. I highly recommend the examples given in the book.


SUBMIT YOUR ARTICLE TO LOGIGEAR MAGAZINE: SUBMISSION GUIDELINES

LogiGear welcomes submissions from new authors and from QA and test engineers. We encourage the next generation of thinkers and practitioners in software quality and testing to be published in LogiGear Magazine. Articles can be original works or articles previously posted on blogs, websites or newsletters, provided they are within a relevant timeframe. Stories can take the form of features, tips and hints, testing instructions, motivational articles, and pieces about software testing practices and industry trends.

Please note:
1. Feature articles must be 500 words or more, Tips and Tricks must be 200 words or less, and Tool & Technology articles must be 500 words or more.
2. Only submissions from original authors will be accepted. By submitting this material, you acknowledge that the material is legal and can be redistributed (book publishers or a public relations firm will need to reference this information at the top of the article). Such articles will not be compensated and will not be accepted if they can be interpreted as advertisements. However, you may place a brief resource box and contact information (but no ads) at the end of your article. Please specify if you wish to have your e-mail included for readers' responses.
3. Due to staffing, editing of submissions is limited; therefore send only the final draft of your work. After an article is received, it may be revised before publishing.
4. Submissions may or may not be used. Publication of any material is at the sole discretion of LogiGear Magazine's editorial staff. You will be notified in writing if your article will be published and in which edition it will appear.

For additional information about our guidelines or to submit an article, please send an email to logigearmagazine@logigear.com
