
SHOWCASING THOUGHT LEADERSHIP AND ADVANCES IN SOFTWARE TESTING

LogiGear MAGAZINE

The Rapidly Changing Test Landscape

Deploying the Cloud? Make 2013 the Year to Do It Right, by Pete Schmitt
Service Virtualization Rises to Cloud App Testing Challenge, by Michael Vizard
What Could the Cloud Be Like in 2013?, by Mandira Srivastava
The Changing Landscape of Software Testing, by Michael Hackett

February 2013 | Volume VII | Issue 1


Letter from the Editor

Editor in Chief: Michael Hackett
Managing Editor: Brian Letwin
Deputy Editor: Joe Luthy

Worldwide Offices

United States Headquarters
2015 Pioneer Ct., Suite B
San Mateo, CA 94403
Tel +01 650 572 1400
Fax +01 650 572 2822

Viet Nam Headquarters
1A Phan Xich Long, Ward 2
Phu Nhuan District
Ho Chi Minh City
Tel +84 8 3995 4072
Fax +84 8 3995 4076

Viet Nam, Da Nang
7th Floor, Dana Book building
76-78 Bach Dang
Hai Chau District
Tel +84 511 3655 33
Fax +84 511 3655 336

www.logigear.com
www.logigear.vn
www.logigearmagazine.com

Copyright 2013 LogiGear Corporation. All rights reserved. Reproduction without permission is prohibited. Submission guidelines are located at http://www.logigear.com/magazine/issue/news/editorial-calendar-and-submission-guidelines/

Change is constant. What's different today is the rate of change. Moore's law resulted from the observation that the rate of change in computing power is exponential. The products, services and software landscape appears just as dynamic. At the same time, we pretty much take for granted the ubiquitous presence of software running our lives and the conveniences and ease it brings.

The landscape of product development is constantly shifting: mobility is everything, and everything thinks. From medical devices to toasters, from radio becoming Pandora, from cash, checks and credit cards becoming online bill pay becoming mPay, everything is run by software, and more software with logic capability is being deployed. The technology landscape too is shifting: the Cloud, SaaS, virtualization, and cross-platform/non-platform-specific mobile apps to HTML5.

Many commentators on technology today state the PC is dead. How will this impact software development, and how can test teams be best prepared for this new landscape? For starters, testing cannot remain the same. Software development today is more agile, less documented, faster and even more distributed, but that is last decade's news. Testing has lagged software development, and the future demands the lag be reduced exponentially. This doesn't mean that as testers we are behind the curve; there are teams already running tens of thousands of automated tests on hundreds of virtual environments against daily builds for global financial security systems. There are many very sophisticated test teams. Yet even their landscape is changing as products and services change.

This issue of LogiGear Magazine examines the seismic shifts changing how we do product development and, specifically, how we test software. The landscape has changed. What we test has changed. How do we keep up? As much as we are looking at global trends, some people are already there.

This issue also reminds me of the mission of LogiGear Magazine. We want to provide a forum for learning new ideas, methods, practices and the state of the practice in software testing, and a view into the broader landscape of software testing. I tend to focus on the project ahead of me and sometimes lose sight of the broader horizon.
In this issue, Mandira Srivastava tells us why 2013 is the year of the cloud; I discuss the changing software testing landscape; Pete Schmitt shows us how to leverage the cloud across your organization; Michael Vizard discusses how to get around issues of scale using virtualization; and Adrian Woodhead reviews the book How Google Tests Software.

Since we moved to a WordPress platform last year, we have built an archive of our past issues and cross-referenced all our content to be searchable by keyword for research on specific topics.

Also, Happy New Year. Happy 2013 and the Year of the Snake in the lunar calendar. The Snake is intuitive, introspective, and refined. He can appear cunning and reticent, and works very modestly in the business environment. The Snake will plot and scheme to make certain things turn out exactly as he wants them to. Hmm. Sounds like many testers I know! Happy New Year. ■

Michael Hackett
Senior Vice President, LogiGear Corporation
Editor in Chief


In This Issue

4   In the News

5   What Could the Cloud Be Like in 2013?
    Mandira Srivastava
    Cloud computing has been helping business enterprises deliver services faster and cheaper compared to all other existing delivery models. 2013 will be the year when the technology is adopted on a larger scale.

7   The Changing Landscape of Software Testing
    Michael Hackett, LogiGear Corporation
    The landscape of software testing is undergoing a fast and dramatic change, driven by the societal, business and technology changes that have placed software everywhere.

16  Deploying the Cloud? Make 2013 the Year to Do It Right
    Pete Schmitt, Customer Storage
    By thoughtfully and strategically preparing for future cloud opportunities, your organization can address these concerns and fully leverage the benefits of cloud technology across the enterprise.

18  Service Virtualization Rises to Cloud App Testing Challenge
    Michael Vizard, IT Business Edge
    Service virtualization provides a way to get around that scale issue that is not only practical, but more importantly should lead to better code being deployed the first time, every time.

19  Software Testing Landscape Glossary
    Some of the terms used when discussing the cloud, virtualization and SaaS.

20  Book Review
    Adrian Woodhead, Last.fm
    A review of How Google Tests Software by James Whittaker, Jason Arbon and Jeff Carollo.

22  Vietnam View: What the Pho? A Look at Vietnam's Signature Dish
    Brian Letwin, LogiGear Corporation
    While Vietnam is known for its regional culinary specialties, there's one dish that transcends geography and can be eaten at any time of day.


In the News

VISTACON 2013 on the Horizon

VISTACON 2013 will continue to bring top experts and professionals to Vietnam to share the latest and greatest developments in software testing. With the success and energy from VISTACON 2011, expectations for the conference have risen, so this year VISTACON 2013 will cover the latest trends in software testing, including mobile testing, cloud testing, test automation, and testing complex and big systems.

The conference, which will take place on April 25 and 26 at the Majestic Hotel in Ho Chi Minh City, will feature industry thought-leaders such as Julian Harty, BJ Rollison and Hans Buwalda. Visit www.VISTACON.vn for more information and special offers on ticket pricing.

Test Engineer of 2013 Announced

On Tuesday, January 29, Peter Spitzer was awarded Test & Measurement World's Test Engineer of the Year Award for 2013. A member of the TRICENTIS team since 2008, Peter is currently the Team Lead in TRICENTIS Test.

He is responsible for the quality assurance of the TOSCA Testsuite, including the creation, planning and implementation of test concepts and reporting the test results to all stakeholders. He is also responsible for creating, implementing, and optimizing the company's test portfolio, which is what led him to this award. Peter flew all the way from Austria to join the awards ceremony, and was commended on his achievements and his willingness to travel halfway around the world to celebrate the award! Read some of the questions he answered during an interview with EDN Network.

Banks to Experience More Software Glitches This Year

The increasing complexity of banking software is resulting in a number of serious software faults. According to Lev Lesokhin, strategy chief at New York-based software analysis firm Cast, the main problem is that while developers are good at building new functions, they are bad at ensuring nothing goes wrong when new software is added to the old.

In only the last year, these types of faults have resulted in major hiccups for companies such as PayPal, Knight Capital, Bats Global Markets and Ulster Bank.

"Modern computer systems are so complicated you would need to perform more tests than there are stars in the sky to be 100% sure there were no problems in the system," explains Lesokhin. "This means no single person, or even group of people, can ever fully understand the structure under the key business transactions in an enterprise. Testing alone is no longer a viable option to ensure dependable systems."


Blogger of the Month

What Could the Cloud Be Like in 2013?
2013 will be an exciting year for business enterprises that rely on the cloud.
By Mandira Srivastava

Cloud computing has been the buzzword in the world of Information Technology for quite some time, and it is likely to retain that status in the coming years. Cloud computing has been helping business enterprises deliver services faster and cheaper compared to all other existing delivery models.

Small and medium business enterprises have changed their outlook toward technology substantially, thanks to the implementation of cloud computing in their core business models. It has brought down infrastructure costs like no other technology before it, not to mention the easy (remote) access features that have developed in critical business models.

The future looks even more promising than the current scenario. The world as we know it always demands more, and we can be sure that we will see many more implementations of cloud computing in as many discrete operational areas as possible. Let's look at what could be the future of cloud computing in 2013.

Speed could be a decisive factor for next-generation cloud computing technologies. Depending upon network bandwidth and speed, cloud-based services could be deployed at rates never before imagined. The benefit will be for end customers who rely on cloud-based bottom-line services. They would get speedy service rendering, resulting in time and cost savings once charges for service time are taken into account.

Collaboration between private and public clouds could be the biggest trend we witness in 2013, as enterprises look for new ways to help their customers benefit from the best of both worlds. A hybrid cloud strategy that provides the functionality and reliability of both is thus a very interesting prospect.

Integration of mobile and cloud computing could become even stronger, with mobile applications calling out to backend services resting on cloud-based platforms. Seamless service delivery will happen from anywhere on the planet.

It is high time people let go of the myths about security in cloud computing. 2013 may very well see the end of such myths, as cloud delivery platforms have become more secure than ever before, with state-of-the-art security firewalls, physical protection, and the security of the data centers hosting these cloud platforms.


Nothing makes your business more portable than cloud computing. It would be possible to port your data and applications anywhere in the world; all you need is a computer connected to the internet. Backup and recovery measures might be fully loaded onto cloud-based platforms thanks to the convenience they offer; case in point, GMO Cloud's add-on services on top of their IaaS offering.

Software development companies will be stressing the importance of the cloud like never before. They can get teams of engineers to work together from every corner of the globe, and this kind of collaboration would help develop software components quickly and efficiently.

Bring Your Own Device, or BYOD, is going to be the talk of the town in 2013. We are seeing a paradigm shift in work culture where the freedom and power of IT are actually given to users via a web-based interface. No longer is there demand for complex hardware for working on complex software products.

Open source cloud computing platforms may see a whole new dimension of demand, as they have demonstrated their ability to rival proprietary competitor platforms. They've also shown their flexibility in supporting a plethora of services, not to mention the huge support they offer to anyone using them, courtesy of their open source tag.

As far as service delivery is concerned, Software-as-a-Service (SaaS) models will continue to improve and provide faster and less expensive user experiences via simpler interfaces. The number of platforms that can be used to access software as a service would increase, with smartphones embracing cloud-based applications as we mentioned earlier.

All in all, 2013 will be an exciting year for business enterprises that rely on the cloud. ■

About Mandira

Mandira Srivastava is a full-time freelance writer who specializes in technology, health and fitness, politics, and financial writing. Equipped with a degree in mass communication, she has worked for both private and corporate clients.


Cover Story

The Changing Landscape of Software Testing
By Michael Hackett, LogiGear Corporation

Introduction

Everything changes. It's the only constant. The landscape of software testing is undergoing a fast and dramatic change, driven by societal, business and technology changes that have placed software everywhere.

Computing is ubiquitous. There is hardly anything today that doesn't contain a CPU or information transmission capability, with software to drive it: from smart toasters to passports with microchips that log and share information, and more on the way every day as technologies cross and blend. The thinking insulin pump delivers correct doses of life-saving insulin, wirelessly sends reports to your phone and your doctor's phone, and orders a refill when supplies get low. Just a few years ago, who would have thought a test strategy for an insulin pump would need to include communication, security and performance testing?

In a relatively short time, financial transactions went from paper checks to online bill pay and now to mPay (mobile payments). The technologies that have enabled such acceleration in product development have changed the testing landscape, dramatically impacting testers and testing.

I feel the earth - move - under my feet
I feel the sky tum-b-ling down - tum-b-ling down
- Carole King

Software testing skills, training, tools and process have traditionally lagged behind the development of new technologies and products for software programming. Even as everyone seems to have become Agile, there are frameworks for project management (such as Scrum) and many recommendations for development practices like eXtreme Programming (XP), but no common test practice reference to keep up with these more dynamic practices. This article examines some of the larger business trends that impact our products and delivery. We will examine some technology, process and mega-trends to better prepare you for the new normal in software testing. These trends will impact your work. The first step in projecting these impacts is understanding the changes. Let's begin!

Where are we now? The Business of Software Development

The software development world is driven by business and enabled by technology. Already we take for granted what seemed so new just a few years ago.

From the beginning of software development, testing was thought of as a necessary evil that always needed to be faster, deliver better quality to customers and keep costs down. This idea is not new. Yet the speed and necessity of these are ever-increasing: faster SDLCs, faster product distribution. Even recently, we have gone from CD distribution to downloadable product to app download and SaaS, with ever more distributed teams and the need for high-volume test automation to meet these demands.

Many people think the move to Agile development practices is a recent shift in the testing landscape. The Agile manifesto, the document that started it all, was written in 2001; that's 12 years ago. Scrum was presented publicly as a framework for software development in 1995; it is now 17 years old. No longer new. Many companies have moved past Scrum to Kanban, which is a newer SDLC but just a digital application of a process introduced in 1953.

While Agile and Scrum are not new, there have been significant changes in product development coupled with Agile. Offshoring and outsourcing Agile development, which we will refer to as distributed Agile, help maximize efficiency and lead to faster product development. These changing practices are some of the many pressures on test teams to automate as much testing as possible.


What has recently changed the landscape of software testing is the full expectation that teams that test software be Agile, distribute tasks for best time and cost efficiency, and accomplish high-volume test automation, all simultaneously. Doing this seamlessly and fast is no longer an expectation. It's a given. It's the new normal, and it's not easy. It requires great communication, ALM tools, expert automation frameworks and highly skilled teams. Yet making Agile, distributed testing and test automation effective and efficient, while reducing work stress and working at a sustainable pace, is still out of reach for some teams.

Game Changers

There are some big trends in business, technology and products, but we need to focus on the disruptive innovations. Not progress or direction adjustors (they are important as well), but wholesale change: the big trends that will impact our work the most and need the most preparation.

These trends are different from adding a new technology; for example, moving from a mobile-optimized website to a native app, or moving to HTML5. Testers will need new skills and perhaps new tools for testing HTML5. This is progress. There will always be technical progress: new languages, platforms, operating systems, databases and tools. Progress is not a game changer.

Disruptive innovation is an innovation that creates a new market by applying a different set of values, which ultimately (and unexpectedly) overtakes an existing market (e.g., lower-priced smartphones). A technology disruptive innovation, or game changer, was moving from procedural to object-oriented programming. It abruptly and radically restructured software development, and it hasn't been the same since.

A disruptive innovation helps create a new market and value network, and eventually goes on to disrupt an existing market and value network (over a few years or decades), displacing an earlier technology. The term is used in business and technology literature to describe innovations that improve a product or service in ways that the market does not expect, typically first by designing for a different set of consumers in the new market and later by lowering prices in the existing market.

Just a few years ago, Salesforce.com created disruptive innovation. Salesforce.com's logo reading "No software." tells the story. People initially thought of Salesforce as a software company, but the fact that it's really a service company, producing software as a service, is now better understood.

This leads us to the first of the mega-trends, or disruptive innovations, changing the landscape of software testing:

1. The Cloud and SaaS

Much has been written about SaaS and how the Cloud is taking over. We will focus on how both of these technologies are changing the landscape of software testing. NIST defines cloud computing as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (for example: networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Often, Cloud Computing and Software as a Service (SaaS) are used interchangeably, but they are distinct. Cloud computing refers to the bigger picture. It is the broad concept of using the internet to allow people to access enabled services: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
Cloud computing means that infrastructure, applications and business processes can be delivered to you as a service, via the Internet or your own network.

Software as a Service (SaaS) is a service within cloud computing. In the SaaS layer, the service provider hosts the software, so you don't need to install, manage, or buy hardware for it. All you have to do is connect and use it. SaaS examples include customer relationship management as a service (www.thebulldogcompanies.com/what-is-cloud-computing).

2. Virtualization

Virtualization (or virtualisation) is the simulation of the software and/or hardware upon which other software runs. This simulated environment is called a virtual machine (VM). There are many forms of virtualization, distinguished primarily by the computing architecture layer. Virtualized components may include hardware platforms, operating systems (OS), storage devices, network devices or other resources.


Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients can pay for on an as-needed basis. A primary goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization. With virtualization, several operating systems can be run in parallel on a single central processing unit (CPU) to reduce overhead costs; this is not to be confused with multitasking, where several programs run on the same OS.

Using virtualization, an enterprise can better manage updates and rapid changes to the operating system and applications without disrupting the user. Ultimately, virtualization dramatically improves the efficiency and availability of resources and applications in an organization. Instead of relying on the old model of "one server, one application," which leads to underutilized resources, virtual resources are dynamically applied to meet business needs without any excess fat (ConsonusTech).

Simulating software and hardware environments has changed test execution and the entire landscape of software testing.

3. Ubiquitous Computing

I prefer the term coined at MIT: Things That Think.

Ubiquitous computing (ubicomp) is a post-desktop model of human-computer interaction in which information processing has been thoroughly integrated into everyday objects and activities. In the course of ordinary activities, someone "using" ubiquitous computing engages many computational devices and systems simultaneously, and may not necessarily even be aware of doing so. This model is considered an advancement from the older desktop paradigm. The more formal definition of ubiquitous computing is "machines that fit the human environment instead of forcing humans to enter theirs."

There is an explosion of things that think, powered by complex algorithms that can process huge amounts of data in near real-time and provide instantaneous results. Smart devices with embedded software systems range from traffic lights and watering systems to medical devices, cars and dishwashers, equipped with scanners and sensors, vision, communication, and object-based media (self-aware content meeting context-aware consumer electronics). Increasing sensor density is enabling massive data gathering from everyday objects. At the consumer electronics show, CES, it was projected that 350 million IP devices will ship in 2013. Computing is ubiquitous: everything not only thinks, but collects data, communicates and makes decisions for you. And all of these thinking devices are ready to be tested!

4. Big Data

By now we have all heard the phrase big data. What does it mean? And, more important to us, how will it affect my work?

Big data is a collection of data sets so large and complex that they become difficult to process using on-hand database management tools or traditional data processing applications. The challenges include capture, curation, storage, search, sharing, analysis and visualization. The trend to larger data sets is due to the additional information derivable from analysis of a large, single set of related data, as compared to separate smaller sets with the same total amount of data, allowing correlations to be found to "spot business trends, determine quality of research, prevent diseases, link legal citations, combat crime and determine real-time roadway traffic conditions."

Some of the often-mentioned applications of big data are meteorology, genomics, connectomics, complex physics simulations, biological and environmental research, internet search, finance, and business informatics. Examples from the private sector:

- Wal-Mart handles more than 1 million customer transactions every hour, imported into databases estimated to contain more than 2.5 petabytes of data, the equivalent of 167 times the information contained in all the books in the US Library of Congress.
- Facebook handles 50 billion photos from its user base.
- The FICO Falcon Credit Card Fraud Detection System protects 2.1 billion active accounts world-wide.

Imagine if you could overlay your blood pressure history with your calendar and discover your most stressful meetings.

In the future, there will be more pressure sensors, infrared, and NFC (Near Field Communication, for mobile payments or mPay). Other sensors will be added, creating new services; automakers, for instance, are adding new features that take advantage of sensor data to help you park. Progressive insurance has a Pay As You Drive feature, with lower rates if you share GPS data.
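The "overlay" idea above hints at what big data correlation looks like in practice. Here is a minimal sketch in Python; the readings, meeting names, and stress threshold are all invented for illustration, and a real system would run this kind of join in parallel across many servers rather than in a single loop.

```python
from datetime import datetime

# Hypothetical samples: blood pressure readings and calendar meetings.
bp_readings = [
    (datetime(2013, 2, 4, 9, 15), 148),   # (timestamp, systolic mmHg)
    (datetime(2013, 2, 4, 14, 5), 122),
    (datetime(2013, 2, 5, 10, 40), 151),
]
meetings = [
    (datetime(2013, 2, 4, 9, 0), datetime(2013, 2, 4, 10, 0), "Budget review"),
    (datetime(2013, 2, 5, 10, 30), datetime(2013, 2, 5, 11, 30), "Release triage"),
]

STRESS_THRESHOLD = 140  # hypothetical systolic cutoff

def stressful_meetings(readings, calendar, threshold=STRESS_THRESHOLD):
    """Join the two data sets on time: flag meetings during which
    a blood pressure reading exceeded the threshold."""
    flagged = set()
    for when, systolic in readings:
        if systolic < threshold:
            continue
        for start, end, title in calendar:
            if start <= when <= end:
                flagged.add(title)
    return sorted(flagged)

print(stressful_meetings(bp_readings, meetings))
# ['Budget review', 'Release triage']
```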


5. Mobility

I know. Everything is mobile. You may think it's last year's news, but it is generally agreed we are just at the beginning of the mobility tidal wave.

Apple launched the iPhone in 2007, and now 65% of mobile phone use time is spent in non-communications activities, as the ecosystem becomes less focused on voice communication. People are using mobile phones more and more as core pieces of hardware. Shawn G. DuBravac, chief economist and research director for the Consumer Electronics Association, states, "The smartphone has become the viewfinder of your digital life." The unexpected consequence is that phones and tablets are becoming hub devices: TV remotes, power notebook computers, blood pressure monitors, etc. Smartphones are now in 52% of U.S. households; tablets in 40% of U.S. households. The traditional PC is dead. Long live the tablet. We are testing in the post-PC era.

If we pause here and look at testing implications, ABI Research indicates revenues from mobile application testing tools exceeded $200 million in 2012. About three-fourths of the revenue was from sales of tools that enhance manual testing, but in the coming years, the market will be driven predominantly by test automation. Growth in the automation segment will push revenues close to $800 million by the end of 2017. ("$200 Million Mobile Application Testing Market Boosted by Growing Demand for Automation," London, United Kingdom, 22 Oct 2012.)

6. Changing Flow of Input

Dozens of companies are working on gesture- and voice-based controls for devices: gesture, not just touch. Haptic computing with touch sensitivity has already been developed for applications from robotics to virtual reality.

Language understanding, speech recognition and computer vision will have application in an ever-increasing number of products. We already have voice-activated home health care and security systems with remote health care monitoring.

Most of us think voice integration for cars is just for calls. Google's driverless car logged 483,000 km / 300,000 miles last year without an accident, and it's legal in a few US states. It will be important for virtual chauffeurs to reliably understand verbal directions, in many languages and dialects. Would you feel safe being driven in a car 100% operated by software that was tested by someone like you, with your skill set?

The tools and software to test these types of applications will be very different from what we know today. They are likely to be graphical applications, much like today's video games, to be able to adequately test the input devices.

Testing in the New Landscape

Big changes are happening in the software and hardware product development world. To examine how these things are changing testers, testing and test strategy, we'll use the SP3 framework. This is a convenient framework to examine who tests, the test process, and what methods and tools are used.

Components of a Test Strategy - SP3

A test strategy has three components that need to work together to produce an effective test effort. We developed a model called SP3, based on a framework developed by Mitchell Levy of the Value Framework Institute. The strategy (S) consists of:

- People (P1): everyone on your team.
- Process (P2): the software development and test process.
- Practice (P3): the methods and tools your team employs to accomplish the testing task.

Note: We need to make some generalizations here about the state of software testing.
While some organizations are now automating 100% of their 10,000- or 100,000-test-case regression, some organizations have 100% manual regression with pick-and-choose coverage. For more automation trends, check out question "O1" in our 2010-2011 survey results on automation testing.

There are teams already testing Google cars. Some teams do all the performance and security testing they need. There are teams already on the cutting edge, though many are not. The following descriptions may already be true for you; I am writing in global terms.


People: Skill, Distributed/Offshore Tasks and Training

Skills

The shift that is happening with people and skills today is a result of ubiquitous computing and Agile development practices. The variety of skills test teams need today has expanded to the point that hiring traditional testers no longer cuts it in software development. For this reason, many organizations have had to go offshore to find available resources for projects.

Increased skills are needed for all testing facets. New devices, platforms, automation tools, big data methods, and virtualization all require new levels of expertise. Today, performance issues with virtual machines (VMs) running Android emulators are a persistent problem. Automating the test cases to run against the emulators on these VMs will require talented experts to sort out current issues and maintain usable performance.

We know distribution of testing tasks is successful and, at the same time, needs more management oversight, great communication infrastructure, increased use of ALM tools, and cross-cultural training.

The Agile/Scrum move to co-located teams, balanced against both the cost and availability of skilled staff, is still playing out on most teams. What has been successful, and what I would advise any team looking for staff optimization, is to first distribute test automation tasks. The level of expertise needed to engineer new test automation frameworks can be successfully separated from test design, and can jump-start automation.

SMEs (Subject Matter Experts)

Not only do testers need to become more technical; all members of the team need to increase subject matter or domain knowledge. Subject matter experts are still essential, but they need to work together with other, more technical testing staff on test case design for automation, data selection, and additional tasks such as fleshing out user story acceptance criteria and bug advocacy. Ultimately, super-specialized test teams will evolve, with domain testers at one end and niche technical testers at the other.

Many Agile practices espouse cross-functional teams. While this is difficult or impossible for the vast majority of teams, testers need to grow their cross-functional skills to do more than just execute test cases! Testers need to be multifunctional, whether that is helping with UI design, code review, automation engineering or help/documentation. Many organizations today think of testers as software developers who test.

Interpersonal Skills

With the blending of work and home life (working hours that overlap across distributed teams, working in bull-pens and cross-functional project team areas, people working from home), working at a sustainable pace is a key eXtreme Programming practice.

In the new software development paradigm, there is a much closer relationship between product owners, customers, business analysts and testing. Development organizations always need more teamwork, seamless communication, trust, respect, and all the other people skills that are essential in a highly technical, distributed environment. Soft skill development is essential, not optional, to thrive in the rapid, distributed development world. Skills include leading technology workers, cross-cultural communications, and using ALM tools.

Training

Lean development as a software practice is gaining many followers today. One of the seven Lean principles is to amplify learning. With the rate of technology, product, and development practice change, it's a no-brainer that there is a constant need for training. For an in-depth view of training in testing, read our February 2012 Training issue.

Process

Fully Distributed

We live with global products and teams that are assembled from a globally distributed workforce. Skill, availability, cost, continuous work and speed: these are all driving organizations to distribute tasks for optimal execution.

Team Process

Software development processes/SDLCs have changed in the past 10 years like never before in the history of software development. The shift toward more metrics, more dashboards, and better measurement, prevalent in the late 90s and early 2000s, has totally reversed.


Now the focus is on leaner processes, with the best metric of team success being the delivery of working software. The only metric in Scrum is burndown; velocity is secondary, and bug counts may be moot and turned into user stories. This may be extreme for your organization, but it is definitely the movement.

Joint Ownership of Quality

It has been a long time since people who test gladly got rid of the QA (quality assurance) job title. However, some unaware people continued to believe that those who test guarantee quality, rather than measure it, report risk, find bugs, or any other achievable task. The Agile movement has finally done away with that illusion, with more teams today espousing joint ownership of quality. There are more upstream quality practices: code review, unit testing, CI, and better user stories and requirements, driven by the need for customer satisfaction and closer communication with test teams. Following the Scrum framework, most teams now have the Product Owner as the ultimate owner of quality.

Speed

The speed of development today is a changing process. The pressure to deploy rapidly and simultaneously on multiple devices or platforms (the ease of distribution, market pressure, cost pressure, do-more-with-less pressure) is pushing some teams to abandon or reduce even some universally accepted practices, such as hardening or release sprints, to accommodate the need for speed. Abandoning process for speed is as old as software development itself, but the pressure is greater today. This is risky when testing large and complex systems, where test teams need high-volume test automation and excellent risk reporting skills.

Software development has also been greatly impacted by the use of ALMs (application lifecycle management tools). This is not the place to detail ALMs (I went into great detail on the subject in a past article), except to say that their use today is bigger than ever and growing. That these tools are more integrated with testing, automation and test management is new. ALMs can greatly facilitate communication with offshore teams. Continuous Integration (CI) is a pervasive XP practice today. The best ALM tool suites have complete CI tooling (source control, unit test execution, smoke test/UI/automated build validation test execution, autobuild, traceability, and reporting) or easy integration for these.

SaaS/Cloud

There are evolving process changes in the Cloud, particularly in SaaS. With the immediacy of deployment, rapid change, and ever-smaller or non-existent architecture, some Agile teams cannot keep up. With the speed at which SaaS products are made available to all customers, patches, hot-fixes or rollbacks are particularly problematic. As always, large-scale test automation is a team's most important defense.

Test Process

Testing often lags behind development in most organizations, and test teams are still responding to Agile. Many test teams are struggling in ScrumBut processes. Most test teams are trying to prevent "handoffs" and wasteful rework, as well as to be flexible and responsive. As noted above, test teams are still figuring out test practices on Scrum teams and in XP practices. This has been the subject of many articles in LogiGear Magazine.

The biggest test process impacts, or game changers, to testing in Agile/XP are CI and the need for high-volume test automation.
These are detailed in the following "Practice: Methods and Tools" section.

Practice: Methods and Tools

Methods

This area is leading the most dramatic changes in software testing.

Massive Automation: Smoke Tests and Regression Tests

There are two foundation practices that have had a profound impact on the testing landscape. Continuous Integration, sometimes mistaken to mean only auto-build, is the automatic re-running of unit tests and the test team's smoke or UI automated tests, and the reporting of changed code, fixed bugs and automation results. This process is significant to the new Agile speed of development, requiring rapid release, and it depends on test teams having significant smoke or build validation tests as well as traceability. Using an ALM such as TFS (Team Foundation Server), for example, that has all the pieces for CI built in, the reporting can detail the code changes and the bug fixes. Gone are the days of getting bad builds, or good builds with no idea what changed. The test team must have a great automated smoke test to have a true CI practice.
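As a concrete illustration of the practice just described, here is a minimal sketch of a CI gate that re-runs the unit tests and the automated smoke suite after each build, failing the build if either fails. The test commands and directory names are placeholders; a real pipeline would wire a step like this into the team's own CI/ALM tooling.

```python
import subprocess
import sys

# Placeholder commands: substitute your real unit test and smoke suite runners.
STAGES = [
    ("unit tests", ["python", "-m", "pytest", "tests/unit"]),
    ("smoke suite", ["python", "-m", "pytest", "tests/smoke"]),
]

def run_gate():
    """Run each stage in order; the first failure fails the build."""
    for name, cmd in STAGES:
        print(f"Running {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"BUILD FAILED: {name} did not pass")
            return result.returncode
    print("BUILD OK: unit tests and smoke suite passed")
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())
```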


Regardless of what other practices you follow, or advances your team is making to keep up with the new paradigms in software development, you cannot succeed without massive automation. As we know, old, high-maintenance, random automation methods will not work today. Automated regression tests are not only for speed; they are necessary for confidence, coverage, focus, and information, and to free us up to do more exploratory discovery. This last point is often lost in the new Agile development paradigms. With reduced design time and vague user stories, test teams need to discover behaviors (intended or unintended), examine alternative paths, and explore new functionality. No one has the time to do this without massive automation.

We also know that scripted, old-style automation does not work. I suggest you read Hans Buwalda's article on Action Based Testing for the most advanced methods for test automation.

The Cloud and SaaS

The Cloud

The common use of the cloud in software development today is virtualization of environments to quickly share tools and resources and run simulations of both hardware and software. This becomes more relevant in the context of the current explosion in the mobile applications market for smartphones and tablets. These applications require shorter development cycles and automated virtual regressions, making cloud testing cost-effective and efficient.

SaaS

Software-as-a-Service presents unique test method challenges that are distinct from the process changes that come with SaaS. The biggest impact of SaaS on test methods is the necessity and growing importance of performance, load, security, and scalability testing, in addition to having our hands full with functionality testing. Where load, performance and scalability testing have often been postponed due to their reputation for being expensive, slow, nice-to-have, optional, etc., there needs to be crucial discussion before compromising security testing.

The value proposition behind SaaS is: let "us" manage the servers, databases, updates, patches, installs and configuring. Everything your IT team used to do when you bought software, you can now buy as a service. Implicit in this proposal is: "we" will manage and secure your data. The biggest hurdle most companies face in letting someone outside the company manage and maintain corporate intellectual property is data security. Security testing is not optional. In some cases, security testing is performed by specialized teams with their own tools. In most cases I see, the test teams are integral to security testing in:

- Test case design.
- Data design.
- Stress points.
- Knowledge of common and extreme user scenarios.
- Reusing existing automation for security testing.

A positive aspect of SaaS testing is the ability for teams to control the production environment. There is no server-side compatibility testing, and even client-side access can be restricted and controlled.

Virtualization

The primary impact of virtualization is the ability to do things faster. Virtualization makes managing test environments significantly easier. Setting up the environments requires the same amount of work, but regular re-use becomes significantly easier. Management and control of environments through virtualization is now a basic tester skill for:

- Executing tests on multiple virtual servers, clients, and configurations using multiple defined data sets, which increases test coverage significantly.
- Reproducing user environments and bugs more easily.
- Easy tracking, snapshots and reproducibility to aid team communication.
- Virtualizing constrained resources (such as a database, service or tool) so that test automation and manual execution can run independently and faster, shared or not.
- Facilitating faster regression testing economically, using many virtual machines.
- Freeing up your desktop.

Performance bottlenecks and problems are common with virtualization test projects. Automation may be more complex to set up. Daily deployment of new builds to multiple (think hundreds of) VMs needs skill to run smoothly. To be effective, your team will probably need a virtualization or tool expert.
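To make the many-VMs point concrete, here is a minimal sketch that fans one regression suite out to a pool of virtual machines and collects pass/fail results in parallel. The host names and the remote test command are hypothetical; in practice, teams drive this through their virtualization platform's own CLI or API.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical VM pool, each already provisioned with the build under test.
VM_HOSTS = ["testvm-01", "testvm-02", "testvm-03", "testvm-04"]

# Hypothetical remote command that runs the regression suite on one VM.
REMOTE_TEST_CMD = "run-regression --suite nightly"

def run_on_vm(host):
    """Execute the suite on a single VM over ssh and capture the result."""
    proc = subprocess.run(
        ["ssh", host, REMOTE_TEST_CMD],
        capture_output=True, text=True,
    )
    return host, proc.returncode

with ThreadPoolExecutor(max_workers=len(VM_HOSTS)) as pool:
    for host, rc in pool.map(run_on_vm, VM_HOSTS):
        status = "PASS" if rc == 0 else f"FAIL (exit {rc})"
        print(f"{host}: {status}")
```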


Devices That Think

Embedded system testing is very different from application testing. With the explosion in things that think, ubiquitous computing, sensors, and devices that interact (and often provide some components of big data), new test methods are needed.

As an overview of some of the complications, there are difficulties in generating a function call and its parameters:

- The need for new tools, APIs, and various input systems.
- The need for virtual environments.
- Separation between the application development and execution platforms (the software will not be developed on the systems it will eventually run on; this is very different from traditional application development).
- How to simulate real-time conditions.

And difficulties with test validation points or observed behaviors:

- Parameters returned by functions or received messages.
- Values of global variables.
- Ordering and timing of information.
- Potentially no access to the hardware until final phases of development.

All of these will potentially complicate test automation, as the sketch below suggests. With the added worry that many embedded systems are safety-critical systems, we will need technical training, more tools, new thinking, new methods, and more sophisticated test automation for devices that think with no UI.
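One common workaround for the validation-point difficulties listed above is to assert on recorded messages rather than on a UI. A minimal sketch, with invented message names and an invented timing budget, against a simulated device:

```python
import time

class MessageRecorder:
    """Capture (timestamp, message) pairs emitted by a device simulator,
    so ordering and timing can be asserted after the run."""
    def __init__(self):
        self.log = []

    def emit(self, message):
        self.log.append((time.monotonic(), message))

    def assert_order(self, expected):
        names = [m for _, m in self.log]
        assert names == expected, f"out of order: {names}"

    def assert_within(self, first, second, budget_s):
        times = {m: t for t, m in self.log}
        elapsed = times[second] - times[first]
        assert elapsed <= budget_s, f"{second} took {elapsed:.3f}s"

# Hypothetical run against a simulated insulin pump with no UI.
rec = MessageRecorder()
rec.emit("DOSE_CALCULATED")
rec.emit("DOSE_DELIVERED")
rec.emit("REPORT_SENT")

rec.assert_order(["DOSE_CALCULATED", "DOSE_DELIVERED", "REPORT_SENT"])
rec.assert_within("DOSE_CALCULATED", "DOSE_DELIVERED", budget_s=0.5)
print("ordering and timing checks passed")
```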
Big Data

Big data methods are not just running a few SQL queries. Saying big data is difficult to work with using relational databases, desktop statistics and visualization packages is an understatement. Instead, "massively parallel software running on tens, hundreds, or even thousands of servers" is required. What is considered "big data" varies depending on the capabilities of the organization managing the dataset, and on the capabilities of the applications that are traditionally used to process and analyze the data set in its domain. For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration (www.en.wikipedia.org/wiki/Big_data).

Correlating global purchases at Wal-Mart with direct advertising, marketing, sales discounts, new product placement, inventory, and thousands of other analyses that can be pulled from the 2.5 petabytes of data creates serious testing challenges. Testing the data, the algorithms, the analysis, the presentation, and the interface, all at the petabyte level, requires software testing by people like you and me.

This scale of testing will require huge virtualization efforts, new test environment set-up and management skills, data management, selection and expected-result prediction, as well as the obvious data analysis skills. There will be heavy tool use in big data testing. Quick facility in aggressive automation is a prerequisite.

Mobility

Much has already been written about the increased needs of mobile testing. I will focus on two less discussed ideas.

There is less and less focus on the phone aspect of mobile devices, and more focus on intersecting technologies: from common intersections such as Wi-Fi, Bluetooth and GPS to newer, high-growth intersections with RFID, the currently hot NFC (near-field communication), ANT+ (used mainly for medical devices), QR codes (quick response codes), and a growing variety of scanners. As with things that think, learning new technologies, new input tools and methods, capturing output for validation and observation, and handling the complexities of test automation are demanded with fast ramp-up.

Security and performance testing are two commonly neglected aspects of mobile testing. Much lip service is given to the need for these test methods, which require different planning, skill, expensive tools, and time to execute. In today's fast-at-all-costs delivery environments, these test methods very often get postponed until a crisis occurs (check out "Mobile Testing by the Numbers" for information on how much mobile apps are performance tested). Testers need to be excellent at risk assessment and at planning for the times when these are added back into the test strategy.
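Performance checks do not have to wait for a crisis or for expensive tooling to begin. Below is a minimal sketch of a smoke-level latency probe for a mobile app's backend; the endpoint, sample count, and budget are placeholders, and a real effort would layer proper load tools on top of checks like this.

```python
import time
import urllib.request

# Placeholder endpoint and budget; substitute your app's real backend API.
ENDPOINT = "https://api.example.com/v1/health"
BUDGET_MS = 800
SAMPLES = 10

def measure(url):
    """Time one full request/response round trip, in milliseconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return (time.monotonic() - start) * 1000.0

latencies = sorted(measure(ENDPOINT) for _ in range(SAMPLES))
p90 = latencies[int(0.9 * (SAMPLES - 1))]
print(f"p90 latency: {p90:.0f} ms (budget {BUDGET_MS} ms)")
if p90 > BUDGET_MS:
    raise SystemExit("performance budget exceeded")
```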


New Input Technologies

In the past, input variation meant testing mainly keyboards for Japanese and Chinese character input. It's no longer that simple. Even the recent phrase "touch, type or tell" must now include gesture. Add in the new and more common virtual thumbwheels and the range of innovation from companies such as Swype and Nuance, and even the most traditional business productivity app must be touch-screen and voice enabled. As with many of the innovations changing the landscape of software testing today, technology-specific training, new skills and new tools are a prerequisite to testing effectively.

Tools

A theme in the new development world is: always leverage technology. Regardless of your product market space, your team's tool use should have recently increased quite significantly. There has been a new focus on test tools like never before.

While I have stated that testing tools often lag behind development tools, the lag is getting smaller. With business seeing the need for speed, and testing often being the bottleneck due to lack of good tools, skills, test environments, and access to builds and code, there has been huge change in tooling for test teams. Today, the availability of significantly better test automation frameworks, supporting more platforms and devices and integrating with more ALM tools, is not only commonplace but essential. And it's becoming more common for test teams to take over the continuous integration tool, or to bring key members in to own it.

In addition to ALM and Continuous Integration tools, as stated earlier, virtualization tools are widely used today and are becoming an IT necessity. You will probably need tools specific to a technology (example: Google's Go or F#), platform, or device (example: input devices, simulators and emulators) as well.

Significant, high-volume test automation is essential, not optional. Old-style test automation will not meet project needs today. Fast, efficient, low-maintenance test design and automation is mandatory. Tools that support action-based testing can help you achieve the highest production and more effective levels of automation. If this includes more programmers focused on designing code for testability and automation stability, so be it; the testing landscape is undergoing revolutionary change. Leverage technology wherever possible!

Summary

Be prepared! The landscape of software testing has changed forever.

The world, business, the marketplace, and society have moved toward ubiquitous computing. Things that think are everywhere. And what we test changes how we test. The flood of new, non-traditional products and services has greatly increased the skills needed to test. And, as always, test everything, but do it faster and cheaper.

If you are not mobile, you will be. If your product has no mobility, it will.

If you are not fully integrated with your programmers and everyone on the development team, you will be.

If you do not currently have an expanded role in security, scalability, load and performance testing, you will.

If you are not doing high-volume automation, you will be.

If you are not virtualized, you will be.

If you are not fully distributed, you will be.

If you are not thinking (or re-thinking) about what you distribute and what you keep in the home office, you will be. ■
About Michael

Michael Hackett, Senior Vice President, is a founding partner of LogiGear Corporation. He has almost two decades of experience in software engineering and the testing of shrink-wrap, client/server and web-based applications in banking, securities, healthcare and consumer electronics.

References

- Wikipedia
- "Application Testing in the Cloud," by Joe Fernandez, Oracle, and Frank Gemmer, Siemens
- "CES 2013: State of the CE Industry and Trends to Watch," by Eric Savitz, Forbes, January 2013
- "The Law of Accelerating Returns," by Ray Kurzweil


Feature

Deploying the Cloud? Make 2013 the Year to Do It Right
Leverage the benefits of cloud technology across your enterprise.
By Pete Schmitt, Customer Storage Inc.

It's no secret that the cloud is growing at an exponential rate. By 2016, two-thirds of the world's server workloads will exist in the cloud. But according to Cisco's 2012 Cloud Index, less than half of server workloads currently run in the cloud. Closing the gap between current capabilities and future requirements is a mission-critical priority for businesses across a range of industries. Without adequate planning and preparation, the race to the cloud can easily become a long slog through a minefield of missed opportunities, user failures and IT nightmares.

As more and more workloads make their way to the cloud each year, enterprises have a vested interest in expanding network capabilities and evolving critical data center infrastructure to accommodate an ever-increasing array of cloud-based applications and data storage requirements.

Key Trends in Cloud Technology

Several trends are driving the migration of applications and data to the cloud.

- Agility: Cloud deployment enables businesses to improve performance and functionality quickly, launching new applications without a corresponding need for additional infrastructure. Agility is especially important in new and young companies, many of which lack the time and resources to deploy a range of diverse applications internally.
- Consumerization of IT: The Bring Your Own Device (BYOD) trend is enabling companies to expand the use of technology through the use of employee-owned devices. The cloud is playing an important role in helping organizations keep up with the pace of BYOD and deliver anytime, anywhere access to workers.
- Cost Drivers: Financial metrics are also a motivating factor in the race to the cloud. In general, Software-as-a-Service (SaaS) is cheaper, faster and easier than traditional deployment models, reducing the cost of infrastructure, physical space and IT labor.

Preparing for the Cloud

Successful preparation for future cloud workloads requires planning. By strategically adapting your network capacity, data center and other critical IT functions, you can substantially improve your organization's ability to operate in the cloud.

Today's networks must be capable of handling constant interactions characterized by rich media and heavy content, particularly as users migrate away from email toward interactions via social media and other channels. Consequently, networks and data centers must expand to support instant access to many different types of content beyond email.

The first step of network expansion is a comprehensive assessment of your organization's app portfolio. In most cases, executive decision-makers are unaware of the scope of applications that are running in the organization. Once all of the applications that are currently running in your organization have been identified, they need to be ranked and categorized according to future requirements. While some applications may need to remain in-house, others can be transferred to a public cloud environment. From there, the organization can begin to evaluate how to expand the network to manage future workloads.

As virtualization becomes more prevalent in data centers and companies adopt a cloud-based strategy, network architects need to rethink and redesign their current infrastructure to adapt to the new traffic patterns. What once used to be primarily "North-South" network traffic flow is now becoming "East-West."
Environments are becoming highly dynamic; workloads are moving to different physical locations on the network as virtual servers are migrated and clients move about the building. The architectures and networking techniques of yesterday are not necessarily well suited to the architectures and applications of today and tomorrow.
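In practice, the portfolio assessment described above often starts as nothing more elaborate than a scored inventory. A toy sketch follows; the applications, criteria, and weights are invented for illustration, and any real ranking would use the organization's own risk model.

```python
# Hypothetical inventory: (app, data_sensitivity 1-5, integration_depth 1-5,
# demand_variability 1-5). Low sensitivity/integration and high variability
# favor an early public cloud move; the weights are illustrative only.
PORTFOLIO = [
    ("Marketing site",  1, 1, 4),
    ("HR self-service", 3, 2, 2),
    ("Trading engine",  5, 5, 3),
]

def cloud_score(sensitivity, integration, variability):
    """Higher score = better early candidate for public cloud."""
    return variability * 2 - sensitivity - integration

# Rank the portfolio from best to worst cloud candidate.
ranked = sorted(PORTFOLIO, key=lambda app: cloud_score(*app[1:]), reverse=True)
for name, s, i, v in ranked:
    print(f"{name:16} score={cloud_score(s, i, v):3d}")
```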


A thorough understanding of networking, the infrastructure that is its foundation, and its relationship to applications is necessary when architecting a data center network that is capable of supporting and adapting to the future challenges that arise as a result of virtualization and cloud computing. The solution must address all aspects of application delivery (security, availability, performance, and visibility) while exhibiting the qualities and characteristics that define cloud architectures, including affordability and elastic scalability. The data must be protected against attacks, intrusions, breaches, and leaks, and categorized based on its importance and network resource needs with Quality-of-Service (QoS) capabilities.

Storage and backup are another key to preparing for cloud migration. Security is a top-of-mind issue in the cloud. Although cloud deployments offer real benefits, your organization needs to know that sensitive data will remain secure. The process of preparing for cloud-based data storage and backup mirrors the process of evaluating network expansion requirements. Starting with an assessment of current data sources and storage routines, the organization needs to evaluate what types of data can eventually either integrate with or be completely transferred to the cloud. Equipped with this information, the organization can begin to identify the technology gaps that need to be addressed to meet future cloud storage and backup requirements.

Purpose-built appliances provide fast backup and restore and deliver local-like performance while using the cloud for secure off-site storage. This helps avoid the need to provision and manage a secondary site for Disaster Recovery (DR) or long-term storage. This can dramatically reduce capital spending, streamline IT infrastructure, and enable payback periods that are measured in months, not years. Appliances can be combined with existing data protection applications and private or public clouds, creating a low-cost, highly scalable storage tier for old or infrequently accessed data. Appliances also allow organizations of all sizes to modernize their data protection architecture, eliminate tape, improve scalability, and improve Disaster Recovery readiness. Cloud storage allows organizations to leverage a pay-for-use pricing model and anywhere availability.

Increased virtualization will alleviate some of the cloud migration challenges upfront. Cloud technology enables organizations to move servers to the cloud and back in an integrated and strategic manner. In fact, the use of virtualization can also play an important role in preparing the organization's culture and stakeholders for future cloud deployments. By increasing the use of virtualization now, you can encourage greater acceptance of the cloud across your enterprise. In essence, virtualization can serve as a bridge to the cloud.

Virtualization is the key technology that enables the cloud; without it there is no cloud. The ability to separate the OS and application from the hardware allows it to be the foundation required for on-demand cloud services. The encapsulation offered by virtualization, and the mobility that enables a live virtual machine to be moved with no downtime for the application, are what the cloud is built on. If you look at virtualization and cloud computing as a whole, it really is not about a product, but a journey.
Increased virtualization can alleviate some of the cloud migration challenges up front. Cloud technology enables organizations to move servers to the cloud and back in an integrated and strategic manner, and the use of virtualization can also play an important role in preparing the organization's culture and stakeholders for future cloud deployments. By increasing the use of virtualization now, you can encourage greater acceptance of the cloud across your enterprise. In essence, virtualization serves as a bridge to the cloud.

It is the key technology that enables the cloud; without it there is no cloud. The ability to separate the operating system and application from the hardware makes virtualization the foundation required for on-demand cloud services. The encapsulation that virtualization offers, and the mobility that lets a live virtual machine be moved with no downtime for the application, are what the cloud is built on. Viewed as a whole, virtualization and cloud computing are not about a product but a journey.

Companies initially enter the world of virtualization because they cannot keep up with increased scale, complexity, and management requirements while maintaining their traditional infrastructure. This leads to the first step in the virtualization journey: consolidating resources and infrastructure to get better utilization of servers and to reduce energy costs. Higher levels of abstraction then allow companies to take advantage of the intelligence built into the virtualization software, which enables High Availability (HA) and replication, load balancing, pooled resources, self-automation and orchestration, service definitions or service profiles, templates, policies, self-service portals, service catalogs, security and identity management, system monitoring and management, capacity planning, billing and chargeback, and licensing.

It is important to understand that the ability to handle future cloud-based workloads will present different challenges and concerns for the stakeholders in your organization: users are motivated by ease of use and increased access to applications and data; CIOs are focused on control, ownership, and security; and CFOs are primarily concerned with cost savings, rate of return, and OpEx versus CapEx. By thoughtfully and strategically preparing for future cloud opportunities, your organization can address these concerns and fully leverage the benefits of cloud technology across the enterprise. ■

About Pete
Pete Schmitt is Vice President of Engineering at Customer Storage Inc. Since 2002, cStor has helped companies strategize, create, and implement best-in-class data center solutions that address business needs. cStor's proven capabilities with key data center technologies provide clients with a fresh perspective, the ability to collaborate with recognized data center experts, and the confidence that goals will be met.


Feature

Service Virtualization Rises to Cloud App Testing Challenge

Service virtualization provides a way around that scale issue that is not only practical but, more importantly, should lead to better code being deployed the first time, every time.

By Michael Vizard, IT Business Edge

One of the challenges of building an application these days is the number of dependencies that application will have on other applications. Ideally, in order to know how an application will actually perform, developers would test it against the applications it depends on as they run in production. The odds of that happening, however, are slim to none, especially if the other application runs as a cloud service that has to be always available to end users.

To solve this problem, developers are increasingly turning to service virtualization, an emerging set of application testing technologies that lets a developer create a replica of another application in a testing environment.
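To make the idea concrete, here is a minimal sketch of such a replica, assuming a Python environment and using only the standard library: a stand-in HTTP service that returns a canned response with an injected delay, so the application under test always has a realistic, available dependency. The endpoint, payload, and latency figure are hypothetical, not part of any particular product.

import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned response imitating the real dependency (hypothetical payload).
CANNED_RESPONSE = {"orderId": 12345, "status": "CONFIRMED"}
SIMULATED_LATENCY_SECONDS = 0.25  # imitate the dependency's typical response time


class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Delay first, to mimic the production service's latency profile.
        time.sleep(SIMULATED_LATENCY_SECONDS)
        body = json.dumps(CANNED_RESPONSE).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Point the application under test at http://localhost:8080
    # instead of the live cloud service.
    HTTPServer(("localhost", 8080), VirtualServiceHandler).serve_forever()

The application under test is simply configured to call the stand-in instead of the live dependency, so tests can run even when the real service is unavailable, rate-limited, or too risky to exercise. Commercial service virtualization tools extend this same pattern with recorded traffic, protocol support, and modeled performance characteristics.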
A recent survey of 200 in-house software development executives and managers from North American enterprises with revenues of more than US $1 billion (the majority, 71%, with over $2 billion in annual revenue), conducted by the market research firm Coleman Parkes Research on behalf of CA Technologies, found that the inability to adequately test applications not only results in missed deadlines; entire functions wind up being eliminated, and the development team as a whole lacks confidence that the application will work as advertised.

Given the often limited scope of most application testing, that may not be all that surprising. In fact, interest in agile development methodologies aside, as the amount of liability attached to an application increases, the more cautious an organization becomes. What is definitely needed, says Shridhar Mittal, general manager for service virtualization at CA Technologies, is a new approach to testing applications that are, for the most part, mashups of any number of existing applications with dramatically different service characteristics. The challenge, of course, is figuring out which of those applications might adversely affect the performance of your application before your application discovers that issue in a production environment, says Mittal. Otherwise, all an organization is doing is releasing code on little more than hope and a prayer that it will actually work.

As applications become increasingly borderless, thanks mainly to the proliferation of APIs that make applications more accessible, it becomes ever more tempting to invoke third-party APIs. But as we all know, the quality of APIs varies widely across the Web. Right now many organizations are using agile development methodologies that, in many instances, amount to little more than trial and error when it comes to invoking APIs.

As the number of application releases and updates that organizations roll out in a given year steadily increases, it is clear that existing approaches to testing applications will not scale in the age of the cloud. Service virtualization provides a way around that scale issue that is not only practical but, more importantly, should lead to better code being deployed the first time, every time. ■

Originally published on ProgrammableWeb (programmableweb.com)

About Michael
Michael is the Editor in Chief of InfoWorld Media Group, where he has been covering computer technology for more than 14 years. He is also a member of the senior leadership team, which provides the strategic vision for InfoWorld Media Group.


Glossary: Software Testing Landscape

Virtualization: Virtualization (or virtualisation) is the simulation of the software and/or hardware upon which other software runs. This simulated environment is called a virtual machine. There are many forms of virtualization, distinguished primarily by computing architecture layer. Virtualized components may include hardware platforms, operating systems (OS), storage devices, network devices, or other resources.

Big Data: A collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The challenges include capture, curation, storage, search, sharing, analysis, and visualization.

ALM: A continuous process of managing the life of an application through governance, development, and maintenance. ALM is the marriage of business management to software engineering, made possible by tools that facilitate and integrate requirements management, architecture, coding, testing, tracking, and release management.

Agile: Characterized by quickness, lightness, and ease of movement; nimble. Not necessarily characterized by fast speed. Agile software development is a software development practice based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, and a time-boxed iterative approach, and it encourages rapid and flexible response to change.

Kanban: Kanban is a method for developing software products and processes with an emphasis on just-in-time delivery while not overloading the software developers. It emphasizes that developers pull work from a queue, and the process, from definition of a task to its delivery to the customer, is displayed for participants to see.

Offshoring/GSD/DSD: Global Software Development (GSD) is "software work undertaken at geographically separated locations across national boundaries in a coordinated fashion involving real-time (synchronous) and asynchronous interaction". Distributed development is a software development model in which IT teams spread across geographical lines collaborate on applications or various software. These teams are often separated by mini-projects that are brought together for a final software buildout. Distributed development is a familiar IT approach, but source code control and other issues of the recent past made it less than ideal. However, modern Web-based tools and collaborative techniques now allow teams to work effectively in a distributed fashion.

Cloud computing: The use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts remote services with a user's data, software, and computation.
Testing in the Cloud: Cloud testing uses cloud infrastructure for software testing. Organizations pursuing testing in general, and load testing, performance testing, and production service monitoring in particular, are challenged by problems such as limited test budgets, tight deadlines, high costs per test, large numbers of test cases, little or no reuse of tests, and the geographical distribution of users.

Testing cloud apps: Cloud testing is often seen as only performance or load testing; however, as discussed earlier, it covers many other types of testing. Cloud computing itself is often described as the marriage of software as a service (SaaS) and utility computing. In terms of test execution, the software offered as a service may be a transaction generator plus the cloud provider's infrastructure software, or just the latter. Distributed systems and parallel systems mainly use this approach for testing because of their inherently complex nature. D-Cloud is an example of such a software testing environment.

SaaS: Sometimes referred to as "on-demand software", SaaS is a software delivery model in which software and associated data are centrally hosted in the cloud. SaaS is typically accessed by users with a thin client via a web browser.

Sources: Wikipedia, Techopedia
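To make the load-testing scenario in the "Testing in the Cloud" entry concrete, here is a minimal sketch, assuming Python and its standard library: a worker pool fans requests out concurrently and summarizes latency, the same pattern a cloud test grid scales across many provisioned machines. The target URL, worker count, and request count are hypothetical.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical target and load shape; a cloud test grid would run many
# copies of this worker pool across rented machines.
TARGET_URL = "http://staging.example.com/health"
WORKERS = 20
REQUESTS = 200


def timed_request(_):
    start = time.perf_counter()
    with urlopen(TARGET_URL, timeout=10) as response:
        response.read()
        status = response.status
    return status, time.perf_counter() - start


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(timed_request, range(REQUESTS)))
    latencies = sorted(elapsed for _, elapsed in results)
    errors = sum(1 for status, _ in results if status != 200)
    print(f"median latency: {latencies[len(latencies) // 2]:.3f}s, errors: {errors}")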


Book Review

How Google Tests Software
By Adrian Woodhead, Last.fm

Having developed software for nearly fifteen years, I remember the dark days before testing was all the rage, and the large number of bugs that had to be arduously found and fixed manually. The next step was nervously releasing the code without the safety net of a test bed and having no idea whether one had introduced regressions or new bugs.

When I first came across unit testing I ardently embraced it, and I am a huge fan of testing in various forms, from automated tests to smoke tests to performance and load tests to end-user and exploratory testing. So it was with much enthusiasm that I picked up "How Google Tests Software", written by some of the big names in testing at Google. I was hoping it would give me fresh insights into testing software at "Google scale", as promised on the back cover, hopefully coupled with some innovative new techniques and tips. While partially succeeding on these fronts, the book as a whole didn't quite live up to my expectations and feels like a missed opportunity.

The book is written in an informal, easy-to-read manner and organized in such a way that readers can read chapters in any order or just choose to focus on the parts that interest them. One annoying layout choice is to highlight and repeat certain key sentences (as is often done in magazines), resulting in one reading the same thing twice, often only words away from the original sentence. Thankfully this is only the case in the first two chapters, but it highlights the variable quality of this book, possibly due to the authors having worked separately on different chapters. How Google Tests Software isn't a book for people new to testing or software development. The authors assume you know a fair amount about the software development lifecycle, where testing fits into it, and what different forms testing can take. It is also largely technology neutral, using specific examples of testing software that Google uses only to illustrate concepts.

After a brief introduction to how testing has evolved over time at Google, the book devotes a chapter to each of the key testing-related roles in the company: the 'Software Engineer in Test' (SET), the 'Test Engineer' (TE), and the 'Test Engineering Manager' (TEM). SETs are coders who focus on writing tests or the frameworks and infrastructure that support other coders in their testing. The TE has a broader, less well-defined role and is tasked with looking at the bigger picture of the product in question, its impact on users, and how it fits into the broader software ecosystem. These two sections form the bulk of the book in terms of pages and interesting content. The TEM is essentially what the name says: someone who manages testers and testing and coordinates these activities at a higher level within Google.

The descriptions of each of these testing roles highlight the ways Google's thinking about testing has matured and also show how some of these approaches differ from other companies'. There are also explanations of the tools and processes that people in these roles use and follow, and this for me was the most interesting part of the book. Topics covered include specific bug tracking and test plan creation tools, risk analysis, test case management over time, and automated testing.
Particularly of note are discussions on using bots to test web pages by detecting differences between software releases, cutting down on the amount of human interaction required, as well as the opposite approach: using more humans via "crowd-sourced testing" among first internal and then select groups of external users. The tools that Google uses to simplify testers' jobs by recording steps to reproduce bugs and by simplifying bug reporting and management sound very useful. Many of the tools described in the book are open source (or soon to be opened) and are probably worth following up on and investigating if this is what you do for a living.
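The bot idea is straightforward to prototype. The sketch below is a minimal illustration, not Google's actual tooling: it fetches the same pages from two release candidates and reports a unified diff of the HTML, the core of a release-comparison bot. It assumes Python; the release hostnames and page list are hypothetical.

import difflib
from urllib.request import urlopen

# Hypothetical hosts serving the current and candidate releases.
OLD_RELEASE = "http://release-a.example.com"
NEW_RELEASE = "http://release-b.example.com"
PAGES = ["/", "/search", "/settings"]  # pages the bot patrols


def fetch(base, path):
    with urlopen(base + path, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace").splitlines()


if __name__ == "__main__":
    for path in PAGES:
        diff = list(difflib.unified_diff(
            fetch(OLD_RELEASE, path),
            fetch(NEW_RELEASE, path),
            fromfile=OLD_RELEASE + path,
            tofile=NEW_RELEASE + path,
            lineterm="",
        ))
        # A non-empty diff flags the page for human review.
        print(f"{path}: {'differs' if diff else 'identical'} ({len(diff)} diff lines)")

A production bot would compare rendered screenshots or DOM trees rather than raw HTML, but the pattern of diffing two releases and escalating only the differences to a human is the same.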


In addition to the main body of text, most chapters also include interviews with Google staff on various testing-related topics. Some of these are genuinely interesting and give the reader a good idea of how testing is tackled at Google on a practical level. However, some of the interviews fall into the "navel gazing" camp (especially when the authors interview one of themselves) and feel more like filler material. I enjoyed the interviews with Google hiring staff the most; their take on how they recruit people for testing roles, and the types of questions they ask and qualities they look for, make a lot of sense. The interview with the Gmail TEM was also good and illustrated how the concepts described in the book are actually performed in practice. The interviews are clearly marked and can thus be easily skipped or skim-read, but one wonders what more useful text could have been included in their place.

The book wraps up with a chapter that attempts to describe how Google intends to improve its testing in the future. The most valuable point here is how testing as a separate function could "disappear" as it becomes part and parcel of the product being developed, like any other feature, and thus the responsibility of all of the people working on the product as opposed to being a separate thing. Another key point made throughout the book is how the state of testing at Google is constantly in flux, which makes sense in such a fast-moving and innovative company but leaves one questioning how much of this book will still be relevant in a few years' time.

How Google Tests Software isn't a bad book, but neither is it a great one. It has some good parts and will be worth reading for those who are interested in "all things Google." For everyone else, I'd recommend skimming through to the parts that grab your attention most and glossing over the rest. ■

About Adrian
Adrian is a hands-on technical team lead at Last.fm in London, where he works with a team focusing on the behind-the-scenes services that power the popular music website. Prior to this, Adrian worked in Amsterdam for two mobile startups (both of which failed), a bank (good money but dull), a content management system (even duller), and a digital rights company (good in theory, evil in practice), and in South Africa for a multimedia company. You can visit his website at http://massdosage.co.za


Vietnam View

What the Pho?
A Look into Vietnam's Signature Dish

Pho [fuh] noun - a Vietnamese soup made of beef or chicken stock with rice noodles and thin slices of beef or chicken.

By Brian Letwin, LogiGear Corporation

Pho is a soup made from simple ingredients: beef, marrow-rich beef bones, star anise, roasted ginger, roasted onion, black cardamom, coriander seed, fennel seed, and clove. The simplicity of the dish, along with the flavor, contributes to pho's widespread appeal.

Vietnam draws thousands of foodies each year who are eager to explore the country's large selection of fresh and simple food. The options are nearly endless, and it's no exaggeration to say one would need to invest a few years in order to sample everything.

Vietnamese cuisine in general has clear regional distinctions. Hanoi is famous for xoi xeo, a dish of sticky rice, fried onions, and ground mung beans. Hue features a plethora of gelatinized rice and tapas-style dishes, and Hoi An is known for cau lau, a soup with noodles made from the ashes of local trees. But there's one Vietnamese soup that transcends not just regional geography but has become popular all over the world: pho.

Pho is a relatively recent creation, making its first appearance in a Hanoi textile market at the turn of the 20th century. But the dish has come a long way from its regional beginnings, and today pho is available everywhere: on the street, in small, family-run restaurants, and even in franchise-style restaurants.

There's no denying that the broth is the most important element of Vietnamese pho; it is in fact the key to a successful bowl. Once the broth is ready, it is served with a generous portion of rice noodles and garnishes such as green onions, white onions, Thai basil, fresh chili peppers, lemon or lime wedges, bean sprouts, and coriander or cilantro. Fish sauce, hoisin sauce, and chili sauce are all common options for adding more flavor to the broth.

Pho lovers judge a bowl served to them in a restaurant by sipping the broth first, without adding any seasoning or garnishes, but the Vietnamese always say that the best pho you will ever taste is the one cooked by your own mother. While it's not uncommon to see women starting the broth before sunrise to accommodate the impending breakfast rush (though pho can be eaten at any time of day), many young adults say their own mothers have given up making pho because of the time needed to prepare the broth. Now if they want pho, they go out to fulfill their craving for the dish. Thanks to its popularity, it's not difficult to enjoy a delicious bowl of soup rain or shine, day or night. ■




LOGIGEAR MAGAZINE
FEBRUARY 2013 ǀ VOL VII ǀ ISSUE 1

United States
2015 Pioneer Ct., Suite B
San Mateo, CA 94403
Tel +1 650 572 1400
Fax +1 650 572 2822

Viet Nam, Da Nang
7th Floor, Dana Book Building
76-78 Bach Dang
Hai Chau District
Tel +84 511 3655 333
Fax +84 511 3655 336

Viet Nam, Ho Chi Minh City
1A Phan Xich Long, Ward 2
Phu Nhuan District
Tel +84 8 3995 4072
Fax +84 8 3995 4076
