Domain 1: Secure Software Concepts

SDLC Components

Fuzzing automatically sends random and invalid inputs to a system to identify issues that could cause the program to crash or exhibit other undesirable behavior.
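
As a rough illustration of the idea (not a production fuzzer), the sketch below feeds random strings to a hypothetical naive_parser and records every input that triggers an unhandled exception:

```python
import random
import string

def naive_parser(text: str) -> int:
    """Toy function under test: parses a comma-separated list of integers."""
    return sum(int(part) for part in text.split(","))

def random_input(max_len: int = 20) -> str:
    """Generate a random, possibly invalid, input string."""
    return "".join(random.choice(string.printable) for _ in range(random.randint(0, max_len)))

# Send random/invalid inputs and record any that crash the function under test.
crashes = []
for _ in range(1000):
    candidate = random_input()
    try:
        naive_parser(candidate)
    except Exception as exc:  # an unhandled error is a finding worth investigating
        crashes.append((candidate, type(exc).__name__))

print(f"{len(crashes)} crashing inputs found")
```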

Bug tracking is the process of recording known issues with software to be fixed in the future. Threat modeling identifies and describes potential threats to software, enabling mitigations to be implemented. Security reviews are periodic audits to validate that security processes are being properly performed during the software development lifecycle (SDLC).

The three foundational system tenets are:

  • Session Management: Management of communication sessions between multiple components

  • Exception Management: Correct management of error conditions

  • Configuration Management: Management of system configurations to ensure functionality and security

Risk Management

The three main ways to manage security risks in production include:

  • Prevention: Blocking a security incident from occurring. Examples of preventative controls include firewalls, access controls, and encryption.

  • Detection: Identifying a security incident that requires mitigation. Detective controls include audit logs, honeypots, and intrusion detection systems (IDS).

  • Response: Mitigating an identified security incident. Incident response efforts are supported by backups, incident response teams (IRTs), and computer forensics.

Prevention should come first, followed by detection and response if prevention fails.

Proactive security actions include threat hunting and similar activities.

The three forms of audit-related risk are:

  • Inherent Risk: The risk that naturally exists in a process or tool before security controls are applied; it should be managed by security controls.

  • Detection Risk: The risk that an audit fails to detect an existing issue.

  • Control Risk: The risk that security controls do not respond quickly or correctly enough to prevent or detect a risk event.

Residual risk is the risk that remains after risk management controls have been implemented.

Cryptography

Some of the core goals of cryptographic algorithms include:

  • Confidentiality: Protecting sensitive information from being disclosed to unauthorized parties. Confidentiality can be protected overtly (encryption, hashing) or covertly (steganography, digital watermarking).

  • Integrity: Preventing data from being modified without authorization. Data integrity can be protected by hash functions, digital signatures, parity bits, and cyclic redundancy checks (CRCs); a hashing sketch follows this list.

  • Availability: Ensuring that authorized personnel can access systems or data. 99.999% uptime is known as "five nines" availability. Load balancing, backups, and redundant systems are examples of solutions for protecting availability.

  • Non-Repudiation: Preventing a user from denying that they took a particular action. Digital signatures and the blockchain’s immutable ledger are examples of protections against repudiation.
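
As a minimal illustration of the integrity bullet above, the sketch below uses Python's standard hashlib module: recomputing a SHA-256 digest and comparing it to the stored value reveals any unauthorized change (the messages and workflow are purely illustrative).

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of the data."""
    return hashlib.sha256(data).hexdigest()

original = b"transfer $100 to account 42"
stored_digest = sha256_digest(original)  # recorded when the data is first written

tampered = b"transfer $900 to account 42"

# Recomputing the digest and comparing it to the stored value detects modification.
print(sha256_digest(original) == stored_digest)  # True  - integrity intact
print(sha256_digest(tampered) == stored_digest)  # False - unauthorized change detected
```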

Access Control Models

Several access control models exist, including:

  • Mandatory Access Control (MAC): MAC centrally controls access to resources based on a combination of sensitivity labels and user clearances. The military Unclassified/Confidential/Secret/Top Secret model with compartments is an example of a MAC system.

  • Discretionary Access Control (DAC): DAC uses the concepts of users and groups and allows users to define who can access their resources. DAC is commonly used by computers, such as Linux’s support for granting read/write/execute permissions to a file’s owner, group, and others.

  • Role-Based Access Control (RBAC): Role-based access control assigns each user a role and a set of associated permissions, which are used to determine if a request is valid. For example, a software developer may have access to certain systems and tools, while a software manager may have access to HR information that the developer cannot access. A minimal sketch of this model follows this list.

  • Rule-Based Access Control (RBAC): Rule-based access control uses access control lists (ACLs) and Boolean logic to determine if a request is valid. For example, rules may restrict the times during which a system can be accessed or the devices permitted to access sensitive data.

  • Attribute-Based Access Control (ABAC): ABAC assigns attributes to a user’s identity that are used to determine their access. For example, a developer may have a certain set of permissions on one system but a different set on another.

  • Resource-Based Access Control (RBAC): Resource-based access control systems include the Impersonation and Delegation Model used by Kerberos and the Trusted Subsystem Model. Under the Impersonation and Delegation Model, one entity delegates its access and privileges to another, allowing the other entity to impersonate it to achieve some task. The Trusted Subsystem Model controls access based on a trusted device rather than a user’s identity.
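
The sketch below makes the role-based model concrete; the users, roles, and permission names are invented for illustration. Each user maps to a role, each role maps to a set of permissions, and a request is allowed only if the user's role grants the required permission.

```python
# Hypothetical roles and permissions, for illustration only.
ROLE_PERMISSIONS = {
    "developer": {"read_code", "commit_code", "run_tests"},
    "manager":   {"read_code", "view_hr_records", "approve_leave"},
}

USER_ROLES = {
    "alice": "developer",
    "bob":   "manager",
}

def is_authorized(user: str, permission: str) -> bool:
    """Allow the request only if the user's assigned role includes the permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("alice", "commit_code"))      # True
print(is_authorized("alice", "view_hr_records"))  # False - not granted to the developer role
```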

Authentication: Verifying the identity of the user, typically by requiring one or more authentication factors; a password-verification sketch follows the list below.

The three main types of authentication factors are:

  • Something you know (passwords, etc.)

  • Something you have (smartphone, token, etc.)

  • Something you are (biometrics, etc.)
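
As a sketch of verifying a knowledge factor (something you know) without storing the plaintext password, the example below uses Python's standard hashlib.pbkdf2_hmac; the password, salt handling, and iteration count are illustrative choices.

```python
import hashlib
import hmac
import os

# Enrollment: store only a random salt and the derived key, never the password itself.
salt = os.urandom(16)
stored_key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 100_000)

def verify_password(attempt: bytes) -> bool:
    """Re-derive the key from the attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt, salt, 100_000)
    return hmac.compare_digest(candidate, stored_key)

print(verify_password(b"correct horse battery staple"))  # True
print(verify_password(b"wrong password"))                # False
```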

Authorization: Validating that an authenticated user has the right to perform a particular action. Authorization can be managed via various access control models.

Accountability: Monitoring and recording activity by users on systems. Audit logs should include at a minimum the user’s identity, the action taken, the object acted upon, and the time at which the action was taken.
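
A minimal sketch of such an audit record using Python's standard logging module; the field names and log file path are illustrative.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="audit.log", level=logging.INFO, format="%(message)s")

def audit(user: str, action: str, obj: str) -> None:
    """Record who did what, to which object, and when (UTC)."""
    logging.info(
        "user=%s action=%s object=%s time=%s",
        user, action, obj, datetime.now(timezone.utc).isoformat(),
    )

audit("alice", "DELETE", "/records/42")
```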

Availability: Ensuring that authorized personnel can access systems or data. Load balancing, backups, and redundant systems are examples of solutions for protecting availability.

OpenID proves a user’s identity and uses a universal identifier. It does not require a prearranged relationship between the Identity Provider (IdP) and the Relying Party (RP).

OAuth requires a prearranged relationship between an IdP and an RP and generates an access token for user authorization.
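
A rough sketch of the token exchange using the OAuth 2.0 client-credentials grant (one of several grant types); the endpoint URLs, client ID, and secret below are placeholders for values a real IdP would issue.

```python
import requests

# Placeholder values - a real IdP publishes its own token endpoint and issues credentials.
TOKEN_URL = "https://idp.example.com/oauth2/token"
CLIENT_ID = "my-app"
CLIENT_SECRET = "s3cr3t"

# The client exchanges its pre-registered credentials for an access token.
response = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
)
access_token = response.json()["access_token"]

# The access token then authorizes calls to a protected resource.
api_response = requests.get(
    "https://api.example.com/reports",
    headers={"Authorization": f"Bearer {access_token}"},
)
```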

Secure Design

Some of the key security design principles include:

  • Economy of Mechanism: Economy of Mechanism or “Keep It Simple” states that the design and implementation of software should be as simple as possible. Complex systems have a larger attack surface and are more difficult to troubleshoot if something goes wrong.

  • Least Privilege: Under the Principle of Least Privilege, users are granted the minimum set of permissions necessary to perform their role.

  • Least Common Mechanism: Least common mechanism states that different processes with different privilege levels should not use the same function or mechanism because it is more difficult to keep these paths separate. Instead, each process should have its own mechanism.

  • Component Reuse: Don’t reinvent the wheel. The use of secure, high-quality components rather than custom code can improve the efficiency and security of software and reduce the attack surface.

  • Open Design: Also known as Kerckhoffs’s Principle, the principle of open design states that a system should not rely on security through obscurity. For example, in encryption algorithms the only secret is the key; all details of the algorithm can be known to an attacker without compromising the security of the system.

  • Psychological Acceptability: If users don’t understand a security control or feel that it obstructs their work, they’ll attempt to work around it, undermining it. Security functionality should be user-friendly and transparent to the user.

  • Separation of Duties: Separation of duties or compartmentalization divides high-risk or critical processes across multiple roles. This reduces the probability that a single malicious user could carry out the action alone or be tricked into doing so.

  • Diversity of Defense: Software defenses should be diverse geographically, technically, etc. This reduces the probability that an event affecting one defense will impact all of them.

  • Resiliency: Software systems should be designed to eliminate single points of failure via backups, redundancy, etc. Otherwise, the failure of a single component could render the system unusable or insecure.

Complete mediation checks access controls on every request, not just the first one, ensuring that they cannot be bypassed. Assuming that follow-on requests are valid creates the potential for bypassing authentication.
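
A small sketch of complete mediation: the hypothetical requires decorator below re-checks authorization on every call instead of trusting the result of an earlier check.

```python
from functools import wraps

# Hypothetical permission store, for illustration only.
PERMISSIONS = {"alice": {"read_report"}}

def requires(permission):
    """Re-check authorization on every call - never trust an earlier check."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if permission not in PERMISSIONS.get(user, set()):
                raise PermissionError(f"{user} lacks {permission}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("read_report")
def read_report(user):
    return f"report contents for {user}"

print(read_report("alice"))  # allowed - the check ran for this request
# read_report("mallory")     # raises PermissionError; every call is mediated
```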

Separation of duties refers to the fact that critical processes (such as approving payments) should be split across multiple people to protect against fraud, social engineering, etc.

The principle of least privilege states that users, applications, etc. should only have the access and privileges needed to do their jobs.

Economy of mechanism (also known as the Keep it Simple principle) states that software design and implementation should be as simple as possible to reduce the risk of errors.

Least common mechanism advises against sharing mechanisms or functions in code among users or processes with different privilege levels.

Fail secure means that a system should default to a secure state if something goes wrong, rather than an insecure one. For example, magnetic locks on a secure area should be locked if they lose power.
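
A minimal fail-secure sketch: if the authorization check itself fails (for example, the permission service is unreachable), the code defaults to denying access rather than granting it. The permission-service call is a placeholder.

```python
def check_permission(user: str, action: str) -> bool:
    """Placeholder for a call to a permission service that may fail."""
    raise ConnectionError("permission service unreachable")

def is_allowed(user: str, action: str) -> bool:
    """Fail secure: any error during the check results in denial, not access."""
    try:
        return check_permission(user, action)
    except Exception:
        return False  # default to the secure (deny) state

print(is_allowed("alice", "open_vault"))  # False - the failure does not grant access
```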

Threat Modeling

Threat modeling identifies and describes potential threats to software, enabling mitigations to be implemented.

Integrity Models

Clark-Wilson is a transaction-based integrity model that defines Constrained Data Items (CDIs) and Unconstrained Data Items (UDIs). Integrity Verification Processes (IVPs) verify that CDIs meet the integrity rules for a particular state, and Transformation Processes (TPs) change CDIs from one valid state to another.

Bell-LaPadula is a confidentiality protection model that combines attributes of Mandatory Access Control (MAC) and Discretionary Access Control (DAC). Its Simple Security Rule prevents reading data at a higher level of classification, while its * property prevents writing data to a system with a lower classification level.
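
The two Bell-LaPadula rules reduce to simple comparisons of classification levels, as in the illustrative sketch below.

```python
# Higher number = higher classification (illustrative ordering).
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple Security Rule: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """* (star) property: no write down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("Secret", "Top Secret"))    # False - reading up is blocked
print(can_write("Secret", "Confidential")) # False - writing down is blocked
```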

Biba is an integrity model designed to protect higher-level, more trustworthy data from being corrupted by lower-level data. Its no-write-up rule blocks a subject from writing data to an object at a higher integrity level. Its second rule (the low-water-mark policy) states that a subject reading or processing data from a lower integrity level has its own integrity level lowered as a result.

Brewer-Nash or the Chinese Wall is a confidentiality model for enterprises. It addresses the case where one group within an organization may have information that cannot be shared with another.

The take-grant model is based on graph theory. A directed graph describes the relationships between nodes, with each edge labeled with the rights (take, grant, read, write) that one node holds over another.
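
A rough sketch of a take-grant protection graph as a labeled directed graph; only the grant rule is shown, and the nodes and rights are invented for illustration.

```python
# Directed graph: each (source, target) edge is labeled with the rights source holds over target.
graph = {
    ("alice", "file1"): {"read", "write"},
    ("alice", "bob"):   {"grant"},
    ("bob",   "file1"): set(),  # bob currently holds no rights over file1
}

def grant(granter, grantee, obj, right):
    """Grant rule: a subject with 'grant' over another subject can pass on a right it holds."""
    if "grant" in graph.get((granter, grantee), set()) and right in graph.get((granter, obj), set()):
        graph.setdefault((grantee, obj), set()).add(right)

grant("alice", "bob", "file1", "read")
print(graph[("bob", "file1")])  # {'read'}
```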
