Domain 6: Secure Software Testing

Application testing may include several types of tests, including:

  • Qualification/Acceptance Testing: Validates that an application is fit for use

  • Functional Testing: Validates that the logic of an application is correct

  • Unit Testing: Verifies that a unit of the software performs its intended purpose. This testing is performed during development, so issues are found early (see the sketch after this list).

  • Non-Functional Testing: Validates that applications meet service level agreements (SLAs) regarding usability, reliability, performance, and scalability
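
As an illustration of unit testing, here is a minimal sketch using Python's built-in unittest module; the sanitize_username function is a hypothetical unit under test invented for this example, not from any particular codebase:

```python
import unittest

# Hypothetical function under test (an assumption for illustration).
def sanitize_username(raw: str) -> str:
    """Strip surrounding whitespace and reject non-alphanumeric names."""
    name = raw.strip()
    if not name.isalnum():
        raise ValueError("username must be alphanumeric")
    return name.lower()

class SanitizeUsernameTest(unittest.TestCase):
    def test_normalizes_case_and_whitespace(self):
        self.assertEqual(sanitize_username("  Alice "), "alice")

    def test_rejects_injection_characters(self):
        # A crafted input should be rejected, not passed through.
        with self.assertRaises(ValueError):
            sanitize_username("alice'; DROP TABLE users;--")

if __name__ == "__main__":
    unittest.main()
```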

Software testers may use various techniques to identify potential issues in an application, including:

  • Failure Mode: Not all errors in an application will cause a crash. Failure mode testing deliberately supplies erroneous inputs to confirm that the resulting faults are detected and handled properly.

  • Regression Testing: Changes to an application’s code can break functional or non-functional requirements. Regression testing is designed to ensure that code still meets requirements after an update.

  • Integration Testing: Applications are deployed in environments alongside other applications and systems. Integration testing ensures that a system as a whole (including multiple different applications) achieves its intended purpose.

  • Continuous Testing: Continuous testing processes build automated testing into development pipelines. This ensures that issues are identified and addressed as early as possible. This use of automated, continuous testing is well-aligned with DevOps and DevSecOps principles.

  • Fuzzing: Fuzzing involves sending malformed and invalid inputs to an application in an attempt to trigger an error. Errors could indicate issues with an application’s logic, and crashes could highlight a flaw in error handling.

  • Penetration Testing: Penetration testing is a human-driven activity in which pen testers duplicate the tools and techniques of real cybercriminals to identify the issues and flaws that an attacker is likely to use.

  • Scanning: Scanners automatically interact with an application to learn information or identify vulnerabilities. For example, network scanners can identify active hosts on a network and the network-connected services that they run, while OS fingerprinting scanners try to identify the OS that a system is running. Vulnerability scanners look for vulnerabilities in an application based on various lists (e.g., the OWASP Top 10, CVE entries, and PCI DSS requirements). A minimal scanner sketch follows this list.

  • Simulation: Simulation testing is performed within a simulated environment that resembles production. It can help identify configuration issues, usability problems, and similar issues before an app is put into production.
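
To make the scanning idea concrete, here is a tiny TCP connect scan sketch in Python; the host and port range are placeholder assumptions, and scans should only be run against systems you are authorized to test:

```python
import socket

def scan_ports(host: str, ports: range) -> list[int]:
    """Tiny TCP connect scan: report ports that accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Placeholder target: scanning localhost's well-known port range.
print(scan_ports("127.0.0.1", range(20, 1025)))
```

Real scanners such as Nmap layer service detection and OS fingerprinting on top of this basic reachability check.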

Test Data Management

Production data can be useful for testing but should be properly anonymized. Some anonymization techniques include:

  • Aggregation: Aggregation combines data from multiple subjects so that no individual subject is identifiable.

  • Sanitization: Sanitization involves removing potentially sensitive data from records.

  • Tokenization: Tokenization replaces sensitive data with a non-sensitive token that represents it on untrusted systems (see the sketch after this list).

  • Minimization: Minimization involves collecting, storing, and processing as little sensitive data as possible.
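
As a sketch of tokenization, the snippet below swaps sensitive values for random tokens and keeps the reverse mapping in a vault that would live only on trusted systems; the record contents are fabricated example data:

```python
import secrets

# Token vault: the mapping back to real values stays on trusted systems only.
_vault: dict[str, str] = {}

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random, non-sensitive token."""
    token = f"tok_{secrets.token_hex(8)}"
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    """Recover the original value -- only possible where the vault lives."""
    return _vault[token]

record = {"name": "Alice Example", "ssn": "078-05-1120"}  # fabricated test data
safe_record = {field: tokenize(value) for field, value in record.items()}
print(safe_record)                     # tokens are safe to hand to test systems
print(detokenize(safe_record["ssn"]))  # reversible only inside the trusted zone
```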

Defects in software can be classified into five categories:

  • Flaws: Design errors

  • Bugs: Implementation errors

  • Behavioral Anomalies: The application does not operate properly

  • Errors and Faults: Outcome-based issues originating elsewhere

  • Vulnerabilities: Issues that can be exploited by an attacker

Remediation of software bugs should be prioritized based on expected impact.

The Common Vulnerability Scoring System (CVSS) is a vulnerability risk scoring system maintained by FIRST (the Forum of Incident Response and Security Teams). It includes three risk metric groups.

The Base metric group provides a general score for vulnerabilities and associated risk (see the scoring sketch after this list). It is broken up into:

  • Exploitability Metrics: Attack Vector, Attack Complexity, Privileges Required, and User Interaction

  • Scope: Whether exploiting the vulnerability affects resources beyond the vulnerable component’s security scope

  • Impact Metrics: Confidentiality Impact, Integrity Impact, and Availability Impact
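
As a worked example of the Base metrics, the sketch below computes a CVSS v3.1 base score for the scope-unchanged case using the weights published in the FIRST specification; it is a simplified sketch, not the full standard (it omits the scope-changed impact equation):

```python
import math

# CVSS v3.1 metric weights (scope unchanged), per the FIRST specification.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required
UI  = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(value: float) -> float:
    """CVSS 'Roundup': smallest one-decimal number >= value."""
    return math.ceil(value * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8 (Critical).
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```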

The Temporal metric group shows how risk changes over time. Its metrics include:

  • Exploit Code Maturity

  • Remediation Level

  • Report Confidence

The Environmental metric group reflects the impacts that different environments and defenses have on risk. Its metrics include:

  • Modified Base Metrics

  • Confidentiality Requirement

  • Integrity Requirement

  • Availability Requirement

Security Testing Methods

Application security testing can be performed in a few different ways, including:

  • White-Box: White-box testing is performed with knowledge of an application’s internals. It can achieve higher test coverage than black-box testing. Unit testing is an example of white-box testing.

  • Black-Box: Black-box testing is performed without knowledge of an application’s internals, sending inputs to the application and observing the responses. It may identify the vulnerabilities most likely to be exploited by an attacker. Penetration testing is an example of black-box testing (see the probe sketch after this list).

  • Gray-Box: Gray-box testing sits between white-box and black-box testing. For example, a tester may be granted the same level of knowledge and access as an advanced user but not access to system documentation.
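
As an illustration of the black-box approach, the sketch below interacts with an application purely through its public interface, sending inputs and observing the responses; the URL and payloads are placeholder assumptions:

```python
from urllib.parse import quote
import urllib.error
import urllib.request

def probe(url: str, payloads: list[str]) -> None:
    """Send each payload as a query parameter and record the response code."""
    for payload in payloads:
        try:
            with urllib.request.urlopen(f"{url}?q={quote(payload)}", timeout=5) as resp:
                print(f"{payload!r} -> {resp.status}")
        except urllib.error.HTTPError as err:
            print(f"{payload!r} -> {err.code}")  # 5xx may signal unhandled input

# Hypothetical endpoint; run only against applications you may test.
probe("http://localhost:8000/search", ["hello", "' OR 1=1 --", "<script>"])
```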

The four main steps in the penetration testing process are:

  1. Reconnaissance: The pen tester explores the target environment, identifying active systems and potential vulnerabilities. Vulnerability scanning may be a part of this stage.

  2. Attack and Exploitation: After identifying a vulnerability, the attacker exploits it to gain access to the target system. From there, they might perform additional reconnaissance and exploitation of vulnerabilities to move laterally through the target network and achieve the objectives of the engagement.

  3. Removal of Evidence: After a penetration test is complete, a tester should clean up after themselves, restoring systems to their original state.

  4. Reporting: A penetration test is intended to provide the customer with insight into the vulnerabilities in their systems, making reporting the most valuable stage of the process to the customer. A pen test report should describe actions taken, findings, and recommended mitigations at a minimum.

Fuzzing involves sending malformed and invalid inputs to an application in an attempt to trigger an error. Fuzz testing can be classified in a few different ways, including:

  • Smart: Smart fuzzing algorithms identify what can go wrong with an application and create inputs designed to trigger these issues.

  • Dumb: Dumb fuzzers randomly generate inputs to an application, hoping to stumble upon an issue.

  • Generation-Based: Generation-based fuzzers use specifications for inputs to an algorithm to develop test inputs.

  • Mutation-Based: Mutation-based fuzzing algorithms take known-good inputs and mutate them to create test cases (see the sketch after this list).
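
The sketch below combines the dumb and mutation-based styles: it randomly corrupts a known-good input and watches for unhandled exceptions. The parse_record function is a hypothetical target invented for illustration:

```python
import random

def mutate(seed: bytes, max_flips: int = 4) -> bytes:
    """Mutation-based case generation: corrupt a few bytes of a good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        data[random.randrange(len(data))] = random.randrange(256)
    if random.random() < 0.3:                 # occasionally truncate as well
        del data[random.randrange(len(data)):]
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 10_000) -> None:
    """Dumb fuzz loop: report inputs the target fails to handle gracefully."""
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target(case)
        except ValueError:
            pass                              # clean rejection: proper handling
        except Exception as exc:
            print(f"unhandled {type(exc).__name__} on {case!r}")

# Hypothetical parser under test: a magic header, a length byte, a payload.
def parse_record(data: bytes):
    if data[:4] != b"RCRD":
        raise ValueError("bad magic")
    length = data[4]                          # IndexError on truncated input
    return data[5:5 + length]

fuzz(parse_record, b"RCRD\x05hello")
```

The truncated-input crash this loop can surface is exactly the kind of error-handling flaw the text describes.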

A verification and validation (V&V) plan should include administrative requirements for:

  • Anomaly resolution and reporting

  • Exception/deviation policy

  • Baseline and configuration control procedures

  • Standards, practices, and conventions adopted for guidance

  • Form of the relevant documentation, including plans, procedures, cases, and results

V&V is classified as management or technical. Management V&V is intended to check that plans and defined procedures have been followed. Technical V&V ensures that software and its documentation have been developed in accordance with specifications.

Verification ensures that software meets its specified requirements (building the product right). Validation ensures that the requirements themselves are correct and complete and that the product is fit for its intended purpose (building the right product).

The Open Source Security Testing Methodology Manual (OSSTMM) was developed by the Institute for Security and Open Methodologies (ISECOM) and uses analytical metrics to assess operational security. It covers five sections, each with its own test/audit areas:

  • Data Networks: Information security controls

  • Telecommunication: Telecommunications networks

  • Wireless: Wireless networks and devices, including mobile devices

  • Physical: Access controls and building and physical perimeter controls

  • Human: Social engineering controls, user security awareness training, and end-user security controls

Cryptography

Cryptography can be fragile and broken in several ways. Some forms of cryptographic validation testing include:

  • Standards Conformance: Verifies that cryptographic code complies with FIPS 140-2 (now superseded by FIPS 140-3) or other regulatory requirements. Examples include the use of approved algorithms, settings, etc.

  • Environment Validation: Verifies that cryptographic code meets requirements for the deployment environment, such as those included in the Common Criteria (ISO/IEC 15408).

  • Data Validation: Verifies that sensitive data requiring confidentiality protections is appropriately secured using approved and validated cryptography.

  • Cryptographic Implementation: Verifies that cryptographic code correctly generates random values and ensures proper key management (see the sketch after this list).
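
One concrete check under cryptographic implementation validation is the source of randomness. A brief sketch of the distinction in Python, with placeholder key and token names:

```python
import random
import secrets

# WRONG for cryptography: `random` is a predictable PRNG, fine only for
# simulations and test-data generation, never for keys or tokens.
weak_key = random.randbytes(32)

# RIGHT: `secrets` draws from the operating system's CSPRNG.
aes_key = secrets.token_bytes(32)         # e.g., a 256-bit symmetric key
session_token = secrets.token_urlsafe(32)
print(len(aes_key), session_token[:8] + "...")
```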

QA Testing

A test case describes a particular requirement to be tested and how an application will be tested against that requirement.

A test script automates the process of implementing a test case, providing repeatability and speeding the testing process.

A test harness brings together all aspects of a testing process, including the systems under test and the tools, data, and configurations used during testing.

Test suites are groups of related tests. For example, multiple tests focused on performance may be collected into a test suite, as in the sketch below.
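
A minimal sketch of a suite using Python's unittest, with hypothetical performance-flavored test cases standing in for real ones:

```python
import unittest

# Hypothetical test cases (placeholders for real timing measurements).
class LoginLatencyTest(unittest.TestCase):
    def test_login_under_200ms(self):
        measured = 0.15                      # stand-in for a real measurement
        self.assertLess(measured, 0.20)

class SearchLatencyTest(unittest.TestCase):
    def test_search_under_500ms(self):
        measured = 0.35
        self.assertLess(measured, 0.50)

# Group the performance-focused cases into one suite and run them together.
loader = unittest.TestLoader()
performance_suite = unittest.TestSuite([
    loader.loadTestsFromTestCase(LoginLatencyTest),
    loader.loadTestsFromTestCase(SearchLatencyTest),
])
unittest.TextTestRunner(verbosity=2).run(performance_suite)
```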
