Testing Guide Introduction

assessment is not required (such as for auditing purposes). Since these tests are the last resort for fixing vulnerabilities before the application is released to production, it is important that such issues are addressed as recommended by the testing team. The recommendations can include code, design, or configuration changes. At this level, security auditors and information security officers discuss the reported security issues and analyze the potential risks according to information risk management procedures. Such procedures might require the development team to fix all high-risk vulnerabilities before the application can be deployed, unless such risks are acknowledged and accepted.

Developers' Security Tests

Security Testing in the Coding Phase: Unit Tests

From the developer's perspective, the main objective of security tests is to validate that code is being developed in compliance with secure coding standards requirements. Developers' own coding artifacts (such as functions, methods, classes, APIs, and libraries) need to be functionally validated before being integrated into the application build.

The security requirements that developers have to follow should be documented in secure coding standards and validated with static and dynamic analysis. If the unit test activity follows a secure code review, unit tests can validate that code changes required by secure code reviews are properly implemented.
Secure code reviews and source code analysis through source code analysis tools help developers in identifying security issues in source code as it is developed. By using unit tests and dynamic analysis (e.g., debugging), developers can validate the security functionality of components as well as verify that the countermeasures being developed mitigate any security risks previously identified through threat modeling and source code analysis.

A good practice for developers is to build security test cases as a generic security test suite that is part of the existing unit testing framework. A generic security test suite could be derived from previously defined use and misuse cases to security test functions, methods, and classes. A generic security test suite might include security test cases to validate both positive and negative requirements for security controls such as:

• Identity, Authentication & Access Control
• Input Validation & Encoding
• Encryption
• User and Session Management
• Error and Exception Handling
• Auditing and Logging

Developers empowered with a source code analysis tool integrated into their IDE, secure coding standards, and a security unit testing framework can assess and verify the security of the software components being developed. Security test cases can be run to identify potential security issues that have root causes in source code: besides input and output validation of parameters entering and exiting the components, these issues include authentication and authorization checks done by the component, protection of the data within the component, secure exception and error handling, and secure auditing and logging. Unit test frameworks such as JUnit, NUnit, and CUnit can be adapted to verify security test requirements. In the case of security functional tests, unit level tests can test the functionality of security controls at the software component level, such as functions, methods, or classes.
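As an illustrative sketch of such a security unit test, the snippet below uses Python's stdlib `unittest` (the same pattern applies in JUnit or NUnit). The `sanitize_username` function and its validation rules are invented here purely for illustration; they are not taken from the guide:

```python
import re
import unittest

def sanitize_username(value):
    """Hypothetical input-validation routine under test:
    accept only 3-20 alphanumeric or underscore characters."""
    if not isinstance(value, str):
        raise ValueError("username must be a string")
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", value):
        raise ValueError("username has invalid characters or length")
    return value

class UsernameValidationTests(unittest.TestCase):
    # Positive requirement: well-formed input is accepted unchanged.
    def test_accepts_valid_username(self):
        self.assertEqual(sanitize_username("alice_01"), "alice_01")

    # Negative requirement: injection-style input is rejected.
    def test_rejects_script_injection(self):
        with self.assertRaises(ValueError):
            sanitize_username("<script>alert(1)</script>")

    # Negative requirement: boundary violations are rejected.
    def test_rejects_boundary_violations(self):
        for bad in ["ab", "x" * 21, "bob;DROP TABLE users"]:
            with self.assertRaises(ValueError):
                sanitize_username(bad)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Tests like these live alongside the ordinary functional unit tests, so the negative (abuse-case) assertions run on every build.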
For example, a test case could validate input and output validation (e.g., variable sanitization) and boundary checks for variables by asserting the expected functionality of the component.

The threat scenarios identified with use and misuse cases can be used to document the procedures for testing software components. In the case of authentication components, for example, security unit tests can assert the functionality of setting an account lockout as well as the fact that user input parameters cannot be abused to bypass the account lockout (e.g., by setting the account lockout counter to a negative number).

At the component level, security unit tests can validate positive assertions as well as negative assertions, such as errors and exception handling. Exceptions should be caught without leaving the system in an insecure state, such as potential denial of service caused by resources not being de-allocated (e.g., connection handles not closed within a finally statement block), as well as potential elevation of privileges (e.g., higher privileges acquired before the exception is thrown and not reset to the previous level before exiting the function). Secure error handling can validate potential information disclosure via informative error messages and stack traces.

Unit level security test cases can be developed by a security engineer who is the subject matter expert in software security and is also responsible for validating that the security issues in the source code have been fixed and can be checked into the integrated system build. Typically, the manager of the application builds also makes sure that third-party libraries and executable files are security assessed for potential vulnerabilities before being integrated in the application build.

Threat scenarios for common vulnerabilities that have root causes in insecure coding can also be documented in the developer's security testing guide.
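The account-lockout misuse case described above can be captured directly as a security unit test. In the sketch below, `AccountLockout` is a hypothetical stand-in for the real authentication component; its name, threshold, and API are assumptions made for illustration:

```python
import unittest

class AccountLockout:
    """Stand-in for an authentication component's lockout logic."""
    def __init__(self, max_attempts=3):
        if max_attempts < 1:
            raise ValueError("max_attempts must be positive")
        self.max_attempts = max_attempts
        self.failed_attempts = 0

    def record_failure(self):
        self.failed_attempts += 1

    def is_locked(self):
        return self.failed_attempts >= self.max_attempts

    def reset_counter(self, value=0):
        # Defensive check: a caller must not be able to "reset" the
        # counter to a negative number and gain extra attempts.
        if value < 0:
            raise ValueError("counter cannot be negative")
        self.failed_attempts = value

class AccountLockoutTests(unittest.TestCase):
    # Positive assertion: lockout engages after the threshold.
    def test_lockout_after_max_failures(self):
        acct = AccountLockout(max_attempts=3)
        for _ in range(3):
            acct.record_failure()
        self.assertTrue(acct.is_locked())

    # Negative assertion: the bypass via a negative counter fails.
    def test_negative_counter_is_rejected(self):
        acct = AccountLockout(max_attempts=3)
        with self.assertRaises(ValueError):
            acct.reset_counter(-10)

if __name__ == "__main__":
    unittest.main(exit=False)
```

The negative test is the one derived from the misuse case: it asserts that the abuse path (a negative lockout counter) is rejected rather than silently accepted.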
When a fix is implemented for a coding defect identified with source code analysis, for example, security test cases can verify that the implementation of the code change follows the secure coding requirements documented in the secure coding standards.

Source code analysis and unit tests can validate that the code change mitigates the vulnerability exposed by the previously identified coding defect. The results of automated secure code analysis can also be used as automatic check-in gates for version control; for example, software artifacts cannot be checked into the build with high or medium severity coding issues.

Functional Testers' Security Tests

Security Testing During the Integration and Validation Phase: Integrated System Tests and Operation Tests

The main objective of integrated system tests is to validate the "defense in depth" concept, that is, that the implementation of security controls provides security at different layers. For example, the lack of input validation when calling a component integrated with the application is often a factor that can be tested with integration testing.

The integration system test environment is also the first environment where testers can simulate real attack scenarios as they could potentially be executed by a malicious external or internal user of the application. Security testing at this level can validate whether vulnerabilities are real and can be exploited by attackers. For example, a potential vulnerability found in source code can be rated as high risk because of the exposure to potential malicious users, as well as because of the potential impact (e.g., access to confidential information).

Real attack scenarios can be tested with both manual testing techniques and penetration testing tools. Security tests of this type are also referred to as ethical hacking tests. From the security testing perspective, these are risk-driven tests and have the objective of testing the application in the operational environment. The target is the application build that is representative of the version of the application being deployed into production.

Including security testing in the integration and validation phase is critical to identifying vulnerabilities due to integration of components as well as validating the exposure of such vulnerabilities. Application security testing requires a specialized set of skills, including both software and security knowledge, that are not typical of security engineers. As a result, organizations are often required to security-train their software developers on ethical hacking techniques, security assessment procedures, and tools. A realistic scenario is to develop such resources in-house and document them in security testing guides and procedures that take into account the developer's security testing knowledge. A so-called "security test cases cheat list or checklist", for example, can provide simple test cases and attack vectors that can be used by testers to validate exposure to common vulnerabilities such as spoofing, information disclosures, buffer overflows, format strings, SQL injection and XSS injection, XML, SOAP, canonicalization issues, denial of
service, and managed code and ActiveX controls (e.g., .NET). A first battery of these tests can be performed manually with a very basic knowledge of software security.

The first objective of security tests might be the validation of a set of minimum security requirements. These security test cases might consist of manually forcing the application into error and exceptional states and gathering knowledge from the application behavior. For example, SQL injection vulnerabilities can be tested manually by injecting attack vectors through user input and by checking if SQL exceptions are thrown back to the user. The evidence of a SQL exception error might be a manifestation of a vulnerability that can be exploited.

A more in-depth security test might require the tester's knowledge of specialized testing techniques and tools. Besides source code analysis and penetration testing, these techniques include, for example, source code and binary fault injection, fault propagation analysis and code coverage, fuzz testing, and reverse engineering. The security testing guide should provide procedures and recommend tools that can be used by security testers to perform such in-depth security assessments.

The next level of security testing after integration system tests is to perform security tests in the user acceptance environment. There are unique advantages to performing security tests in the operational environment. The user acceptance test environment (UAT) is the one that is most representative of the release configuration, with the exception of the data (e.g., test data is used in place of real data). A characteristic of security testing in UAT is testing for security configuration issues. In some cases these vulnerabilities might represent high risks.
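The manual check described above, injecting an attack vector and watching whether a database exception is echoed back to the user, can be partially automated. The sketch below is only an illustration of the idea: the error signatures are a small, non-exhaustive sample, and `scan_response_for_sql_errors` is a hypothetical helper, not a substitute for a real scanner:

```python
# Substrings that database error messages commonly leak into HTTP
# responses; a real checklist would be much longer.
SQL_ERROR_SIGNATURES = [
    "SQL syntax",                 # MySQL
    "ORA-",                       # Oracle error codes
    "SQLite3::SQLException",      # SQLite (Ruby driver)
    "Unclosed quotation mark",    # Microsoft SQL Server
    "PG::SyntaxError",            # PostgreSQL
]

# Typical first-battery attack vectors submitted via user input fields.
ATTACK_VECTORS = ["'", "\"", "' OR '1'='1", "1; --"]

def scan_response_for_sql_errors(body):
    """Return the list of error signatures found in a response body."""
    return [sig for sig in SQL_ERROR_SIGNATURES if sig in body]

# A response that echoes a raw database error back to the user is
# evidence of a potentially exploitable injection point.
leaky_response = (
    "<html>Error: You have an error in your SQL syntax near ''1'='1'</html>"
)
print(scan_response_for_sql_errors(leaky_response))  # ['SQL syntax']
```

As the text notes, a leaked SQL exception is only a manifestation of a possible vulnerability; confirming exploitability still requires the more in-depth techniques discussed next.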
For example, the server that hosts the web application might not be configured with minimum privileges, a valid SSL certificate, and a secure configuration, with essential services disabled and the web root directory cleaned of test and administration web pages.

Security Test Data Analysis and Reporting

Goals for Security Test Metrics and Measurements

Defining the goals for the security testing metrics and measurements is a prerequisite for using security testing data for risk analysis and management processes. For example, a measurement such as the total number of vulnerabilities found with security tests might quantify the security posture of the application. These measurements also help to identify security objectives for software security testing: for example, reducing the number of vulnerabilities to an acceptable number (minimum) before the application is deployed into production.

Another manageable goal could be to compare the application security posture against a baseline to assess improvements in application security processes. For example, the security metrics baseline might consist of an application that was tested only with penetration tests. The security data obtained from an application that was also security tested during coding should show an improvement (e.g., fewer vulnerabilities) when compared with the baseline.

In traditional software testing, the number of software defects, such as the bugs found in an application, could provide a measure of software quality. Similarly, security testing can provide a measure of software security. From the defect management and reporting perspective, software quality and security testing can use similar categorizations for root causes and defect remediation efforts. From the root cause perspective, a security defect can be due to an error in design (e.g., security flaws) or due to an error in coding (e.g., security bugs).
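The two metric goals just described, an acceptable maximum before deployment and improvement against a baseline, can be expressed as a simple release gate. The function name, thresholds, and finding counts below are invented for illustration; real thresholds come from the organization's risk management procedures:

```python
def passes_security_gate(vuln_counts, baseline_counts,
                         max_high=0, max_medium=5):
    """Decide whether a build's security test results meet release goals.

    vuln_counts / baseline_counts map severity -> number of findings,
    e.g. {"high": 0, "medium": 3, "low": 12}.
    """
    # Goal 1: vulnerabilities reduced to an acceptable number.
    if vuln_counts.get("high", 0) > max_high:
        return False
    if vuln_counts.get("medium", 0) > max_medium:
        return False
    # Goal 2: no regression against the baseline measurement.
    total = sum(vuln_counts.values())
    baseline_total = sum(baseline_counts.values())
    return total <= baseline_total

baseline = {"high": 2, "medium": 7, "low": 20}  # pen-test-only baseline
current = {"high": 0, "medium": 4, "low": 15}   # also tested during coding
print(passes_security_gate(current, baseline))  # True
```

A gate like this makes the security objective measurable: the build either meets the agreed posture or the remaining risks must be explicitly acknowledged and accepted.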
From the perspective of the effort required to fix a defect, both security and quality defects can be measured in terms of developer hours to implement the fix, the tools and resources required to fix it, and the cost to implement the fix.

A characteristic of security test data, compared to quality data, is the categorization in terms of the threat, the exposure of the vulnerability, and the potential impact posed by the vulnerability to determine the risk. Testing applications for security consists of managing technical risks to make sure that the application countermeasures meet acceptable levels. For this reason, security testing data needs to support the security risk strategy at critical checkpoints during the SDLC.

For example, vulnerabilities found in source code with source code analysis represent an initial measure of risk. A measure of risk (e.g., high, medium, low) for the vulnerability can be calculated by determining the exposure and likelihood factors and by validating the vulnerability with penetration tests. The risk metrics associated with vulnerabilities found with security tests empower business management to make risk management decisions, such as deciding whether risks can be accepted, mitigated, or transferred at different levels within the organization (e.g., business as well as technical risks).
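One common way to combine the factors mentioned above into a qualitative rating is a likelihood-by-impact matrix. The scales and cutoffs in this sketch are illustrative assumptions, not a standard; OWASP's own risk rating methodology weighs many more factors:

```python
def risk_rating(likelihood, impact):
    """Map likelihood and impact scores (each 1=low .. 3=high)
    to a qualitative risk level for a vulnerability."""
    for score in (likelihood, impact):
        if score not in (1, 2, 3):
            raise ValueError("scores must be 1, 2, or 3")
    severity = likelihood * impact
    if severity >= 6:
        return "high"
    if severity >= 3:
        return "medium"
    return "low"

# A flaw that is easy to exploit (3) and exposes confidential data (3)
# rates high; the same flaw behind strong mitigations may rate lower.
print(risk_rating(3, 3))  # high
print(risk_rating(1, 2))  # low
```

Ratings produced this way are what lets business management compare findings across applications and decide whether each risk is accepted, mitigated, or transferred.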