From requirements to project effort estimates – work in progress (still?)

Charles Symons, Founder & Past President, The Common Software Measurement International Consortium
Cigdem Gencel, Assistant Professor and Senior Researcher, Faculty of Computer Science, Free University of Bolzano, Italy

REFSQ Conference, Essen, Germany, April 2013
© Symons, Gencel 2013

The estimating ‘cone of uncertainty’ is too wide, especially early in a project life

[Figure: indicative uncertainty (+/-) in the whole-project effort estimate narrows across the phases Feasibility, Requirements, Design, Build & Unit Test, System Test and Implement – from perhaps more than x 2 at the start to less than x 0.5 at the end]

• Requirements uncertain; estimating methods poor
• Executives want more certainty before making significant investments

Most project estimating still relies on informal methods [6]

• Expert judgement, estimating by analogy (most common, least data)
• Use of benchmark data (e.g. ISBSG [7])
• Open estimating methods (e.g. COCOMO [8])
• Black-box commercial estimating tools (least common, most data)

(The ideas in this paper are relevant to all types of estimating methods)

Software ‘product size’ is the biggest driver of project effort

• ISBSG estimating: Effort = Functional Size / Productivity (the productivity figure should account for NFR)
• COCOMO II estimating: Effort = [Physical Size (SLOC)]^N x ‘Cost Drivers’
• Commercial estimating tools: Effort = fn[Physical Size (SLOC)] x ‘Cost Drivers’ (?)
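The two model shapes above can be sketched in a few lines of Python. All numeric values below – the assumed productivity and the COCOMO-style coefficients A and N – are illustrative placeholders, not calibrated figures:

```python
# Minimal sketch of the two effort-model shapes above.
# All numbers used here are illustrative assumptions, not calibrated values.

def isbsg_effort(functional_size_cfp: float, productivity_cfp_per_hour: float) -> float:
    """ISBSG-style model: Effort = Functional Size / Productivity.
    The productivity figure (which should account for NFR) is in CFP per person-hour."""
    return functional_size_cfp / productivity_cfp_per_hour

def cocomo_style_effort(ksloc: float, a: float, n: float,
                        cost_driver_product: float = 1.0) -> float:
    """COCOMO II-style model: Effort = A x (Physical Size)^N x 'Cost Drivers'."""
    return a * ksloc ** n * cost_driver_product

# 1000 CFP at an assumed 0.5 CFP per person-hour:
print(isbsg_effort(1000, 0.5))                        # person-hours
# 10 KSLOC with assumed A = 2.94, N = 1.10, nominal cost drivers:
print(round(cocomo_style_effort(10, 2.94, 1.10), 1))  # person-months
```

Note that any diseconomy of scale in the SLOC-based model comes from an exponent N > 1, which is one reason errors in the size input are amplified in the effort output.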

Many methods and ‘routes’ are used to estimate size, then effort, early in a project

[Diagram: with increasing requirements detail, counts (e.g. of Use Cases, User Stories) can be converted to approximate FP’s or approximate CFP’s, then to IFPUG FP’s* or COSMIC CFP’s* (* ISO standard FSM methods); from there one route leads to effort estimating via ISBSG, another via SLOC to COCOMO & commercial tools]

Every conversion from one size-type to another, or to effort, adds to uncertainty

‘Error propagation’ arises due to [9]:
• Intrinsic differences between the input and output variables (e.g. two size-types), so they do not correlate well
• Errors in measurement of the input(s) to the conversion
• Increasing the number of variables and algorithmic complexity of the conversion

(In principle, the more variables we add to the conversion formula, the greater the accuracy, but the more sources of input measurement error, resulting in greater error propagation.)

FSM methods can measure a functional size of requirements as they evolve

IFPUG (Albrecht):
• 1970’s empirical model
• Business applications
• Counts ‘elementary processes’ and ‘files’
• Limited size range of processes and files
• Most widely used

COSMIC:
• Fundamental software engineering principles
• Business, real-time & infrastructure software
• Measures ‘functional processes’
• No size limit on functional processes
• Rapidly increasing usage

For now, think of IFPUG sizing as a less well-defined and approximate version of COSMIC sizing.

Use Case sizes vary enormously. Conversion to a functional size is only possible locally

Company A – Project Type I:

UC No | # of Trans. | FP (Trans. Size) | CFP | CFP/FP
UC1   |  1 |  6 |  27 | 4.5
UC2   |  1 |  7 |  25 | 3.6
UC3   |  1 |  6 |  29 | 4.8
UC4   |  3 | 16 |  46 | 2.9
UC5   |  1 |  6 |  30 | 5.0
UC6   |  1 |  6 |  28 | 4.7
UC7   |  9 | 44 | 112 | 2.5
UC8   |  9 | 59 | 122 | 2.1
UC9   |  2 | 12 |  52 | 4.3
UC10  |  2 |  9 |  25 | 2.8
UC11  |  1 |  6 |  30 | 5.0
UC12  | 15 | 88 | 267 | 3.0
UC13  | 10 | 51 | 113 | 2.2
UC14  |  5 | 17 |  24 | 1.4
UC15  |  1 |  6 |  10 | 1.7

Company A – Project Type II:

UC No | # of Trans. | FP (Trans. Size) | CFP | CFP/FP
UC1   | 1 |  7 | 22 | 3.1
UC2   | 1 |  7 | 13 | 1.9
UC3   | 1 |  7 | 15 | 2.1
UC4   | 1 |  7 | 25 | 3.6
UC5   | 1 |  7 | 17 | 2.4
UC6   | 1 |  7 | 14 | 2.0
UC10  | 1 |  7 | 13 | 1.9
UC11  | 1 |  7 | 18 | 2.6
UC12  | 1 |  7 | 14 | 2.0
UC13  | 1 |  7 | 20 | 2.9
UC14  | 1 |  6 | 17 | 2.8
UC15  | 1 |  7 | 10 | 1.4
UC16  | 1 |  7 | 17 | 2.4
UC17  | 1 |  7 | 15 | 2.1
UC25  | 4 | 24 | 32 | 1.3
UC26  | 4 | 13 | 16 | 1.2
UC27  | 1 |  6 |  8 | 1.3
UC28  | 4 | 12 | 17 | 1.4

Different project types may have different:
• # of Transactions / UC
• Average size / UC
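A short sketch makes the point concrete; the CFP/FP ratio lists are copied from the two Company A tables above:

```python
# CFP/FP ratios from the two Company A project types above. The medians
# differ markedly between the two types, so any CFP:FP conversion ratio
# must be derived locally, per project type - there is no universal ratio.
from statistics import median

type_i_cfp_per_fp = [4.5, 3.6, 4.8, 2.9, 5.0, 4.7, 2.5, 2.1, 4.3, 2.8,
                     5.0, 3.0, 2.2, 1.4, 1.7]
type_ii_cfp_per_fp = [3.1, 1.9, 2.1, 3.6, 2.4, 2.0, 1.9, 2.6, 2.0, 2.9,
                      2.8, 1.4, 2.4, 2.1, 1.3, 1.2, 1.3, 1.4]

print(median(type_i_cfp_per_fp))   # Type I median ratio
print(median(type_ii_cfp_per_fp))  # Type II median ratio
```

The Type I median (3.0 CFP per FP) is roughly half as large again as the Type II median (about 2.05), even within one company – which is why any conversion must be calibrated locally.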

Conclusions for users of UML and for ‘Agilistas’ wanting to estimate total size

• Learn how FSM methods define a ‘transaction’*
• When analyzing requirements, determine the number of transactions in each Use Case or User Story
• Then either use an average functional size of each transaction:
  Total size = (No. of transactions) x (Av. size per transaction)
• or use a more sophisticated approximate FSM method, e.g. with an average transaction size per ‘size band’

* ‘transaction’ = IFPUG ‘elementary process’ or COSMIC ‘functional process’
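Both options above can be sketched as follows; the average transaction sizes and the size bands used here are illustrative assumptions that must be calibrated from your own measured projects:

```python
# Sketch of the two approximate sizing options above. The averages and the
# size bands are assumed, illustrative values - calibrate them locally.

def size_by_average(n_transactions: int, avg_cfp_per_transaction: float) -> float:
    """Option 1: Total size = (No. of transactions) x (Av. size per transaction)."""
    return n_transactions * avg_cfp_per_transaction

def size_by_bands(band_counts: dict, band_avg_cfp: dict) -> float:
    """Option 2: a locally calibrated average transaction size per 'size band',
    summed over all bands."""
    return sum(count * band_avg_cfp[band] for band, count in band_counts.items())

# 40 transactions at an assumed average of 7.5 CFP each:
print(size_by_average(40, 7.5))
# 25 small, 10 medium and 5 large transactions at assumed band averages:
print(size_by_bands({"small": 25, "medium": 10, "large": 5},
                    {"small": 5.0, "medium": 10.0, "large": 20.0}))
```

The banded variant is the ‘more sophisticated’ option: it preserves some of the size distribution that a single average washes out.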

All we know about FS:SLOC conversion tells us that it is very unreliable

Functional sizes:
• A measure of functional requirements; ignores NFR
• International standards
• Technology-independent
• Measurement results show an economy of scale with increasing software size (up to ~2000 FP)

SLOC sizes:
x A designer’s view of size; implements NFR
x No reliable standards
x Technology-dependent
• Measurement results show a dis-economy of scale with increasing software size (up to 1 MSLOC)

There is little published evidence to support claimed FP/SLOC conversion ratios for various programming languages

Results confirm that CFP and SLOC sizes do not correlate well (neither do FP and SLOC)

Software components – one automotive company [10]:
# of Comp. | SLOC/CFP: Min | Med | Max | Std. Dev.
15 | 9.2 | 26.1 | 65.8 | 15.8

Software projects (new) – ISBSG dataset:
Prog. Lang. | # of Projects | SLOC/CFP: Min | Med | Max | Std. Dev.
C++ | 14 | 2.95 | 6.03 | 20.6 | 6.04

Using COCOMO II to estimate effort from SLOC also has intrinsic difficulties

SLOC sizes:
x Difficult to estimate from requirements
x No reliable standards
x Technology-dependent
x A designer’s view of size
• Implement all requirements, incl. NFR

COCOMO II effort:
x A complex model with up to 17 variables
x Calibrated by expert judgment from a limited range of projects
• But ‘open’ and widely used with local calibration

Even best-case assumptions show significant uncertainty on any one conversion

Assume size = 1000 ± 100 CFP (i.e. ±10%)

[Diagram, annotating the earlier ‘routes’: single conversions from the COSMIC CFP’s carry uncertainties of roughly ±10% and ±14% on the routes between counts (of Use Cases / User Stories), approximate FP’s / approximate CFP’s and COSMIC CFP’s (which feed ISBSG effort estimating), ±32% for conversion to IFPUG FP’s, and ±37% for conversion to SLOC (which feeds COCOMO & commercial tools)]

If your process involves multiple conversions, expect major error propagation

Assume 1000 ± 100 CFP

[Diagram: COSMIC CFP’s converted to IFPUG FP’s gives ±31%; then to SLOC, ±43%; then to effort via an estimating tool, ±49%]

Beware of estimating tools that:
• accept CFP sizes as input
• convert 1:1 to FP
• convert FP to SLOC and then to effort
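The slide’s figures are consistent with combining independent relative errors in quadrature at each conversion step. The per-step uncertainties below (~30% for each size conversion, ~22% for the final conversion to effort) are our assumptions, chosen only to approximately reproduce the slide’s ±31%, ±43% and ±49%:

```python
# Quadrature propagation of independent relative errors along a chain of
# conversions. The per-step error values are assumed, for illustration only.
from math import sqrt

def propagate(input_rel_error: float, step_rel_errors: list) -> list:
    """Return the cumulative relative error after each conversion step."""
    cumulative, out = input_rel_error, []
    for step in step_rel_errors:
        cumulative = sqrt(cumulative ** 2 + step ** 2)
        out.append(cumulative)
    return out

# Start at 1000 CFP +/- 100 (i.e. 10%); steps: CFP->FP, FP->SLOC, SLOC->effort
errors = propagate(0.10, [0.30, 0.30, 0.22])
print([round(e, 2) for e in errors])  # → [0.32, 0.44, 0.49]
```

Quadrature assumes the step errors are independent – a best-case combination in that sense – yet the uncertainty still roughly quintuples over three conversions.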

Conclusion: estimating methods should be calibrated directly with functional sizes

[Diagram: approximate FP’s / approximate CFP’s (for very early requirements) lead to IFPUG FP’s and COSMIC CFP’s, which feed ISBSG estimating and COCOMO & commercial estimating tools directly, with no intermediate conversion to SLOC]

Conclusions from this section

• Do not rely on CFP:FP or on FP:SLOC conversion for estimating
• Segment the project types that recur frequently in your organization. Then, for each project type:
  – Adapt your own very early sizing methods, e.g. ‘counts’ of Use Cases (UCP) or User Stories (USP), and/or approximate FSM methods
  – Calibrate your estimating method or tool with your own data on software functional size, functional profile and other project attributes, using as few variables as possible to achieve the desired accuracy

Agenda

• Estimating: the issues
• How can we improve early estimation of project effort from requirements?
• What about Non-Functional Requirements?
• Conclusions for Requirements Engineers

Whether we use our own or public data, we need consistent measures across four fields

[Diagram: project data at the centre of four fields – System & Project Requirements, Estimating, Performance Measurement, and Benchmarking]

We can measure software size consistently across our four fields...

... but what about all the other data we could gather (> 100 possible variables)?
• Effort and schedule data (not easy!)
• NFR – whatever they are?

Let’s start with a simple distinction between Functional (FR) and Non-Functional Requirements (NFR)

[Diagram: Requirements divide into Functional Requirements – “what the software must do” – and Non-Functional Requirements – all other requirements on the system (and the project?)]

We must consider NF ‘constraints’ as well as ‘requirements’

• (NF) Requirements, e.g. must: use C#; use existing COTS; high availability
• Constraints, e.g. inexperienced team, uncertain requirements

We must record NFR and constraints to help interpret performance measures & benchmark data, and for estimation purposes

Also, we need to distinguish what the ‘requirements’ (incl. constraints) apply to

Requirements can apply to any of: the system, the software, the project, the technology, or other deliverables

NFR vary enormously in importance between different types of systems

Typical business application:
• NFR do not vary much across many systems
• Many may cause the same % effort overhead for similar projects (?)
• ISBSG estimating OK?

vs. mission-critical systems (e.g. air traffic control, trading systems, etc.):
• “NFR can account for half the pages of a statement of requirements” [11]
• Segment project types on NFR for more accurate estimates (cf. COCOMO)

So what do we really mean by ‘NFR’? ISO/IEC definitions [12] are really bad

• Functional requirement: “A requirement that specifies a function that a system or system component must be able to perform”
• Function: “A task, action, or activity that must be accomplished to achieve a desired outcome”
• Non-functional requirement: “A software requirement that describes not what the software will do but how the software will do it.”
  – Example: software performance requirements, software external interface requirements, software design constraints, and software quality attributes.
  – Note: Non-functional requirements are sometimes difficult to test, so they are usually evaluated subjectively.

Wikipedia definitions are totally useless

“Functional requirements define what a system is supposed to do whereas non-functional requirements define how a system is supposed to be”

So let’s take an example requirement: ‘security’.
• “The system is required to ensure security against unauthorised access” (Functional?)
or
• “The system is required to be secure against unauthorised access” (Non-Functional?)

Many requirements that initially appear as NFR evolve to software FR [12]

[Diagram, along the project time-line:
• Functional Requirements remain Functional Requirements, and can be sized
• Non-Functional Requirements – examples: maintainability, interfaces, operations, reliability, usability, etc. – largely evolve into Functional Requirements
• ‘True’ NFR, e.g. technology, project & performance constraints, remain; they should be recorded and may be quantifiable]

Abran/Al-Sarayreh’s findings are endorsed by Butcher [11] for Mission-Critical Systems

• “Most Non-Functional Requirements evolve into requirements for software functionality and for hardware”
• “I prefer to distinguish ‘direct’ and ‘indirect’ functional requirements”

I found 108 possible types of NFR. Their usage is grossly inconsistent [14]

[Diagram: four fields use largely different NFR sets –
• NFR recording (IEEE 804, ISO 9126, Wiki): 50 NFR’s
• Project estimating (COCOMO II): 39 NFR’s
• NFR sizing (VAF/TCA & SNAP): 36 NFR’s
• Benchmarking (ISBSG & SEI): 48 NFR’s
How many NFR’s are common to all? “Interfaces”]

Summary so far

Industry understanding of NFR is chaotic:
• There is no clear, accepted definition of NFR. ISO/IEC definitions are actually harmful
• Many NFR that are important for estimating evolve into functional requirements as a project progresses
• Approaches to performance measurement, benchmarking and estimating use largely different sets of NFR and constraints

Now some ideas on the way forward – real WIP!

Assume the COSMIC definition for a ‘Non-Functional Requirement’

NFR = ‘Any requirement for or constraint
• on a hardware/software system,
• or on a project* to develop or maintain such a system,
except:
• a functional user requirement** for software,
• or a requirement that evolves into a functional user requirement for software’

* If you prefer to define ‘project requirements’ separately from NFR, that’s OK by us
** ‘Functional user requirements’ is quite well defined in ISO/IEC 14143/1

Divide ‘traditional’ NFR into ‘Quasi-NFR’ and ‘True NFR’

• Quasi-NFR (34): requirements that may evolve wholly or partly into FUR, e.g. usability, interfaces, security (measure their functional size)
• True NFR: requirements that cannot be implemented as software functions (record and take into account in software project performance measurement, benchmarking & estimating):
  – Software constraints (10), e.g. must use C#, execute in batch mode
  – Technology constraints (16), e.g. must run on Unix, use data communications
  – System constraints (9), e.g. response time, no. of users
  – Project constraints (35), e.g. budget ≤ $1M, multi-site teams
• Other (system) deliverables (4), e.g. documentation, training (treat separately)

Rationalise NFR to a manageable set that can be recorded with performance measurements for their interpretation and use

Example: only 9 of the 34 Quasi-NFR’s need be recorded

No. / Type | Examples | Proposed action
16 x common | Usability, reporting | Can be 100% accounted for in the FS; no need to record?
9 x common | Availability, interfaces | Measure the FS and record on a nominal or ratio scale
2 x common for all systems | Auditability | Ignore (common overhead)
2 x very uncommon | Emotional factors | Ignore
5 x synonyms or sub-types | Reliability, resilience | Account for in the above

The biggest need: reconcile data recording for benchmarking with data needed for estimating

Data needed for estimating but not considered in benchmarking (?):
• Project risk
• Project resource or time constraints

Possible solutions for benchmark data:
• A ‘Relative Risk Index’?
• Minimum: record the constraints. Ideal: a means of analysing project data to quantify the effect of such constraints

... And do not record data for benchmarking studies that is never used for interpretation, nor for estimating

The ‘still-to-do’ list

• Refine and define the taxonomy of NFR’s, and measurement scales where possible
• Test with a range of experts
• Publish and publicise
• Promote the ideas to suppliers of benchmarking services and estimating methods

Please contact Chris Woodward of the UK Software Metrics Association if you would like to help with this approach: chris.woodward@btinternet.com

Agenda

• Estimating: the issues
• How can we improve the early estimation of project effort from requirements?
• What about Non-Functional Requirements?
• Conclusions for Requirements Engineers

Conclusion: early estimating is enormously important. It must be improved

Obvious areas for improvement:
• Recognise the real problems of conversion between size measurements, and of error propagation in estimating
• Improve the consistency of data collected for performance measurement and benchmarking, and data needed for estimating
• NFR must be better defined and understood

Final remark: measurement is not an overhead

As this is a ‘Requirements Engineering for Software Quality’ conference, please consider:
– Measurement is intrinsic to any engineering discipline
– Functional size measurement provides a quality control on the requirements

If software requirements cannot be measured with some confidence, you cannot begin to estimate the project or build the software reliably

Thank you for your attention

Charles Symons – cr.symons@btinternet.com – www.cosmicon.com
Cigdem Gencel – cigdem.gencel@unibz.it

References (1)

1. Standish CHAOS Report, 2009
2. McManus, J. and Wood-Harper, T., ‘A Study in Project Failure’, June 2008
3. Whitfield, D., ‘Cost over-runs, delays and terminations: 105 outsourced public sector ICT projects’, European Services Strategy Unit, Research Report No. 3, December 2007
4. Hill, P., ‘Are We Really That Bad? A look at software estimation accuracy’, ISBSG, CAI webinar, February 2013
5. Flyvbjerg, B. and Budzier, A., ‘Why your IT project may be riskier than you think’, Harvard Business Review, September 2011
6. Jørgensen, M., ‘A review of studies on expert estimation of software development effort’, The Journal of Systems & Software 70 (2004), 37–60
7. Hill, P. (Ed.), ‘Practical Software Project Estimation’, International Software Benchmarking Standards Group, McGraw-Hill, ISBN 978-0-07-171791-5, 2011
8. Boehm, B. et al., ‘Software Cost Estimation with COCOMO II’, Prentice-Hall, Englewood Cliffs, NJ, 2000, ISBN 0-13-026692-2
9. Santillo, L., ‘Error Propagation in Software Measurement and Estimation’, 16th International Workshop on Software Measurement, Potsdam, Germany, 2006

(continued)

References (2)

10. Gencel, C., Heldal, R. and Lind, K., ‘On the Relationship between Different Size Measures in the Software Life Cycle’, Asia-Pacific Software Engineering Conference (APSEC 2009), pp. 19–26, Malaysia, December 2009
11. Butcher, C., ‘Delivering Mission-Critical Systems’, BCS Central London Branch meeting, 18th November 2010
12. ISO/IEC 24765:2009, ‘Systems and Software Engineering Vocabulary’
13. (Example paper) Al-Sarayreh, K.T. and Abran, A., ‘Specification and Measurement of System Configuration Non Functional Requirements’, 20th International Workshop on Software Measurement (IWSM 2010), Stuttgart, Germany, 2010
14. Symons, C., ‘Accounting for Non-Functional Requirements in Productivity Measurement, Benchmarking & Estimating’, UKSMA/COSMIC International Conference on Software Metrics & Estimating, London, October 2011

More information on the COSMIC method and its uses is available for free download from www.cosmicon.com
