CompTIA Security+ SY-601 Certification [Part 3]

Section 2 of the Security+ SY-601 exam, Architecture and Design, covers topics ranging from enterprise security architecture and virtualization to cloud and code security, cryptography, physical security, and cybersecurity resilience.

1. Secure Code Design

Applications are written in code and deployed either in a client-server model or installed client-side as local applications. The maturity of an organization's software development process can be rated against the CMMI model.

  • Software Development – The software development cycle begins with an RFC (Request for Change) from the customer. The code is then updated and, after proper Change Management approvals, deployed to production via three environments. The development environment is where software development takes place. The testing environment is used by the quality assurance team to test new code. Once testing is complete, the code is pushed to the staging environment, which mirrors production, to check compatibility before the final push to production.
  • Software Quality Assurance – The first step in software quality assurance is a code review by a senior team member or a team of reviewers. The software is then validated and verified against the initially agreed business requirements to ensure all of them are met. After verification, the code is put through static and dynamic tests to determine its behavior in different situations based on the input provided. Fuzz testing is an automated way of testing code behavior by sending malformed, invalid, or unexpected input to the software.
  • Code Repositories – Using a common code repository is vital for source control, code integrity management, and avoiding dead code in an application. It also matters when the program calls third-party libraries in its code configuration.
  • Application Control and Patch Management – Applications installed on remote systems can be controlled using Group Policies to blacklist or whitelist applications. Umbrella patch-management systems update and apply the latest patches to remote devices connected via the organization's intranet.
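
The fuzz testing described above can be sketched in a few lines. This is a toy harness, not a real fuzzer; `parse_age` is a hypothetical function under test, and the goal is to show that a fuzzer feeds malformed input and records any crash other than the rejection the function is documented to raise.

```python
import random
import string

def parse_age(raw):
    """Toy input handler under test: expects a small positive integer."""
    value = int(raw)  # raises ValueError on malformed input
    if not 0 < value < 150:
        raise ValueError("age out of range")
    return value

def fuzz(target, rounds=1000):
    """Feed random, often malformed strings to the target and record any
    exception other than the documented ValueError as a finding."""
    findings = []
    corpus = ["", "-1", "999999999999", "NaN", "' OR 1=1 --", "\x00\xff"]
    for _ in range(rounds):
        raw = random.choice(corpus + [
            "".join(random.choices(string.printable, k=random.randint(0, 20)))
        ])
        try:
            target(raw)
        except ValueError:
            pass              # expected rejection of bad input
        except Exception as exc:
            findings.append((raw, type(exc).__name__))
    return findings

findings = fuzz(parse_age)
```

A real fuzzer (AFL, libFuzzer, etc.) adds coverage feedback and input mutation, but the core loop is the same: generate hostile input, watch for unexpected behavior.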

2. Application Attacks

The Open Web Application Security Project, or OWASP, provides a comprehensive analysis of the ways an application may be attacked and made to behave unexpectedly. Commonly faced scenarios involve incorrect data input, insecure application design, unhardened baseline software, unpatched or vulnerable components used in the application, and improper monitoring procedures.

Key Forms of Application Attacks

  • SQL Injection – Most web applications use a backend database to store user data. A SQL injection attack uses malicious SQL code to manipulate the application and access information stored in the database; input validation is important to prevent these attacks. Another form of web-based attack is cross-site scripting (XSS), where the attacker places a malicious script on a legitimate website, waiting for visitors to run it. A similar attack is cross-site request forgery (CSRF), where the malicious script sends requests on the user's behalf to another website open in the same browser session.
  • Buffer Overflow – Another web-based attack that exploits user input is the buffer overflow, in which the attacker sends an oversized or malformed bulk of input values, hoping to trigger unexpected behavior in the underlying application code. Attackers may also try to exploit cookies stored in the browser and shared in HTTP requests during a user session. The cookie guessing attack follows this strategy: cookie values for an authenticated session are guessed or captured through session hijacking and reused to access a web account.
  • Code Execution and Privilege Escalation – Attackers may try to execute malicious code directly on the server side or gain privileged user access to the application. These two scenarios are harder to pull off but have larger implications than hijacking a user session. Applications should follow least-privilege access and regular patching of underlying software components.
  • System Drivers and Memory – Every application runs on compute resources with hardware drivers and system memory for its processes. Memory leaks, null pointer dereferences, and DLL injection are some of the ways an application may assign memory to a process without releasing it when the process stops, or point to information that doesn't exist.
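
The SQL injection attack above can be demonstrated concretely. The sketch below uses Python's built-in `sqlite3` module with a throwaway in-memory database; the table, column names, and payload are illustrative only. The vulnerable lookup builds the query by string concatenation, so the classic `' OR '1'='1` payload rewrites the WHERE clause; the parameterized version treats the same payload strictly as data.

```python
import sqlite3

# Throwaway in-memory database with hypothetical demo data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def lookup_vulnerable(name):
    # String concatenation lets attacker-controlled input rewrite the query.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'"
    ).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "nobody' OR '1'='1"
# lookup_vulnerable(payload) returns every row's secret;
# lookup_safe(payload) returns nothing.
```

The same principle applies to any database driver: bound parameters, not string building, are the primary defense against SQL injection.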

Security Strategy for Application Attack

  • Input Validation – Input validation can be implemented on both the server side and the client side, though the server side is the recommended place to enforce it, since client-side checks can be bypassed. It works by creating whitelist or blacklist conditional rules for the different input options. To ensure only authenticated users can access an application, all passwords must be hashed and salted so that user credentials are never stored or sent in plain text. Additionally, session traffic must be encrypted using TLS-based HTTPS sessions.
  • Output Encoding and Error Handling – Output encoding is a method to stop users from smuggling in dangerous parameters or tags, as in a SQL injection attack. The web application encodes special input such as a SELECT * SQL statement, replacing it with an encoded representation so that the backend database never receives a SQL query as input. Error handling gives the application specific steps or error messages to return whenever it receives unexpected input, for example during a buffer overflow attack.
  • Code Signing and Database Security – Code signing ensures all third-party code used in an application is digitally signed by a trusted software owner or developer. Database security covers encrypting data at rest and having a consistent strategy for creating tables. Database activity should be monitored regularly to identify any irregular behavior.
  • Data De-identification and Data Obfuscation – Critical user data such as date of birth or credit card numbers must be stored as hidden or obfuscated values using masking, hashing, or salting techniques, so that anyone who gains access to the database still cannot read the information in plain text.
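
The hashing-and-salting advice above can be sketched with Python's standard library. This is a minimal illustration, not a production password store (which would use a dedicated scheme such as bcrypt or Argon2); the function names and the 100,000-iteration count are illustrative choices.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash; a unique random salt means identical
    passwords still produce different stored values."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = hash_password("correct horse battery staple")
```

Because only the salt and digest are stored, a database breach exposes neither the plain-text password nor (thanks to per-user salts) reusable rainbow-table targets.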

3. Cryptography

Cryptography is the mechanism of encrypting and decrypting information using computational algorithms to protect data in transit and data at rest. Cryptography ensures,

  1. Confidentiality of data in all three states, At Rest, In-Transit, and In-Use by using encryption
  2. The Integrity of data by using Hashing
  3. Authentication of users by using Key-Pair
  4. Data Obfuscation by using Salting
  5. Non-Repudiation by using Digital Signatures
  6. Transport Layer Security (TLS) by using Digital Certificates
  • Cryptography Mechanisms – Cryptography uses two broad types of encryption and decryption: symmetric (the same key encrypts and decrypts) and asymmetric (a key pair with one public and one private key handles encryption and decryption).
  • Ciphers – Data can be encrypted in chunks or blocks, referred to as a block cipher, or in a continuous stream of data, referred to as a stream cipher.
  • Symmetric Encryption – Symmetric encryption uses a single key to both encrypt and decrypt. Algorithms include the now phased-out DES or Data Encryption Standard (64-bit block, 56-bit key) and 3DES (64-bit block, 112-bit effective key). AES or Advanced Encryption Standard (128-bit block, 128/192/256-bit key) and Twofish (128-bit block, 128/192/256-bit key) are still considered secure and widely used. RC4 (40- to 2048-bit key), a stream cipher once used by WEP, WPA, and TLS, is now considered insecure.
  • Asymmetric Encryption – Asymmetric encryption uses a key pair: User A uses User B's public key to encrypt a message, which User B decrypts using the private key from their key pair. RSA, named for Rivest, Shamir, and Adleman (typically 1024- to 4096-bit keys), is the most commonly used asymmetric algorithm. PGP combines symmetric and asymmetric encryption by encrypting the symmetric session key with the recipient's public key. The recipient first decrypts with their private key to extract the symmetric key, which in turn is used to decrypt the data.
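
The public/private key-pair mechanics can be illustrated with textbook RSA using the classic tiny primes 61 and 53. This is a toy for intuition only: real systems use vetted libraries, padding schemes, and 2048-bit or larger keys.

```python
# Toy, textbook-style RSA with tiny primes — for illustration only.
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent via modular inverse (2753)

def encrypt(m):
    # Anyone holding the public key (e, n) can encrypt.
    return pow(m, e, n)

def decrypt(c):
    # Only the private key holder (d, n) can decrypt.
    return pow(c, d, n)

message = 65
ciphertext = encrypt(message)  # 2790
```

Note the asymmetry: `e` and `n` can be published freely, but recovering `d` requires factoring `n`, which is infeasible at real key sizes.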

4. Key Management and Secure Exchange

For any encryption to be successful and reliable, the confidentiality of the key is the single factor that, if compromised, undermines everything. Mechanisms supporting secure key handling include Diffie-Hellman (key exchange), key escrow (key recovery), key stretching (strengthening weak keys), and hardware security modules (tamper-resistant key storage).
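
Diffie-Hellman deserves a concrete sketch: each party combines its own private exponent with the other's public value, and both arrive at the same shared secret without it ever crossing the wire. The modulus below is deliberately tiny; real deployments use standardized 2048-bit+ groups (e.g., RFC 3526) or elliptic-curve variants.

```python
import secrets

# Toy parameters for illustration; far too small for real use.
p = 0xFFFFFFFB  # a prime modulus (largest 32-bit prime)
g = 5           # generator

# Each party keeps a private exponent and publishes g^x mod p.
a_priv = secrets.randbelow(p - 2) + 1
b_priv = secrets.randbelow(p - 2) + 1
a_pub = pow(g, a_priv, p)
b_pub = pow(g, b_priv, p)

# Each side raises the other's public value to its own private exponent:
# (g^b)^a = (g^a)^b = g^(ab) mod p, so both derive the same secret.
a_shared = pow(b_pub, a_priv, p)
b_shared = pow(a_pub, b_priv, p)
```

An eavesdropper sees only `p`, `g`, `a_pub`, and `b_pub`; recovering the shared secret from those requires solving the discrete logarithm problem.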

  • Public Key Infrastructure – PKI is the mechanism for trusting the source of a public key: Certificate Authorities (CAs) issue X.509 digital certificates binding an individual's or organization's identity to the key, after a valid Certificate Signing Request (CSR). The certificate can be used to identify servers in a browser's TLS session or to encrypt messages using the associated public key. Certificate chaining is the scenario where a digital certificate is validated by a chain of CAs, both internal and external to the organization; the topmost authority in this chain is called the Root CA. A single certificate can also cover all subdomains of a domain, termed a wildcard certificate. Commonly used certificate formats are PEM, DER, CRT, etc.
  • Hashing – Hashing is the mechanism that turns readable information into a fixed-size digest that cannot be reversed to recover the original data; any change in the input completely changes the hashed output. Commonly used hash functions are Message Digest 5 or MD5 (128-bit hashes), which has long been proven insecure against collisions, and the Secure Hash Algorithm or SHA family (SHA-1 with 160-bit hashes, now deprecated; SHA-2 with 224-, 256-, 384-, and 512-bit hashes; and SHA-3 with the same output lengths plus extendable-output variants).
  • Digital Signatures – Digital signatures take the reverse approach to identity verification: an individual uses their private key to create the signature, which anyone can then verify using the public key from the same key pair. The Digital Signature Algorithm (DSA), the Rivest-Shamir-Adleman signature algorithm (RSA), and the Elliptic Curve Digital Signature Algorithm (ECDSA) are commonly recommended secure algorithms for creating digital signatures.
  • Crypto Attacks – Attackers may try to crack the encryption on a system and extract data. One example is the brute-force attack, in which the attacker simply tries every permutation to break open the encrypted data. To improve the odds of success, knowledge-based attacks try common English words or patterns first. This creates a trade-off dilemma for the security expert: longer keys improve security, but they increase the computational cost of encryption and decryption.
  • TLS Handshake – The TLS handshake is PKI-based communication using digital certificates to secure traffic between server and client, as used by the HTTPS protocol. The server sends its digital certificate, containing its public key, to the client, which verifies the server's information with the CA named in the certificate. Once verified, the client uses that public key to encrypt an ephemeral session key and sends it to the server, which decrypts it with its own private key. All communication from then on uses this temporary, unique session key, so it cannot be read by an eavesdropper.
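
The integrity property of hashing described above is easy to see in code: a one-character change in the input flips the digest almost entirely (the avalanche effect), while the output length stays fixed. The messages are made-up examples.

```python
import hashlib

def sha256_hex(data):
    # SHA-256 always yields a fixed-size 256-bit digest (64 hex chars),
    # regardless of input length.
    return hashlib.sha256(data.encode()).hexdigest()

h1 = sha256_hex("transfer $100 to alice")
h2 = sha256_hex("transfer $900 to alice")  # one character changed
# h1 and h2 share almost no common hex digits, so any tampering with a
# message is caught by comparing its hash against the stored digest.
```

This is exactly how integrity verification works in practice: ship the data and its digest separately, recompute the digest on receipt, and reject on mismatch.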

5. Physical Security

All network equipment and software applications ultimately depend on physical data centers for compute resources. Securing these data centers, including authorizing access to them, planning server layout, maintaining cabling hygiene, and controlling the data center environment, is equally vital for secure, highly available services.

  • Environmental Protection – Data centers are highly susceptible to physical damage from the internal and external environment. Factors covered by this protection include temperature (managed by cooling and a hot/cold aisle layout), fire (controlled by various types of fire extinguishers and sprinkler systems), and natural disasters (handled by maintaining a cold or warm standby site).
  • Data Center Access Control – Access to the data center should only be provided to authorized persons using biometric or physical locks, secured by man-trap doors and motion, video, or noise sensors.
  • Data Storage Sanitization – Data stored on disks or paper must be removed in a way that cannot be recovered after the destruction process. Clearing, purging, and destroying are the three ways to sanitize digital media, while shredding is the recommended approach for paper records.
  • Business Continuity Planning – The Business Continuity Plan is important for identifying business-critical systems and the risks associated with them, based on a Business Impact Analysis. For BCP, system owners can design applications in fault-tolerant or highly available (load-balanced) mode to minimize impact on mission-critical applications. Common single points of failure in physical infrastructure are the power supply (recommendation: dual sources), storage (recommendation: RAID), and networking (recommendation: multiple paths and port teaming).
  • Disaster Recovery – Disaster recovery planning involves setting up identical hot, cold, or warm standby sites in different geo-locations, along with restoring faulty systems from recent backups. RTO (Recovery Time Objective) defines how quickly a system must be recovered before the business is impacted, while RPO (Recovery Point Objective) defines how much data loss is tolerable, i.e., the point in time to which the system must be restored. Backups may be Full, Incremental (backing up changes since the last backup of any kind), or Differential (backing up changes since the last Full backup).
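
The incremental/differential distinction trips up many exam takers, so here is a minimal sketch. The file names and day-number timestamps are made up; the only point is the reference time each mode compares against.

```python
def files_to_back_up(files, last_full, last_backup, mode):
    """Select files whose modification time is newer than the reference:
    differential compares against the last FULL backup, while incremental
    compares against the most recent backup of any kind."""
    reference = last_full if mode == "differential" else last_backup
    return sorted(name for name, mtime in files.items() if mtime > reference)

# Hypothetical modification times, expressed as day numbers.
files = {"a.txt": 1, "b.txt": 3, "c.txt": 5}
last_full, last_incremental = 2, 4

diff = files_to_back_up(files, last_full, last_incremental, "differential")
incr = files_to_back_up(files, last_full, last_incremental, "incremental")
# diff grows over time (everything since the full backup);
# incr stays small (only changes since the last backup).
```

This is also why restores differ: a differential restore needs the full backup plus one differential, while an incremental restore needs the full backup plus every incremental since.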

6. Cloud Security

Cloud security is getting key attention these days as organizations move their workloads to virtualized environments. Cloud service providers (CSPs) offer computing services such as Infrastructure as a Service, Platform as a Service, and Software as a Service to cloud customers. Cloud service partners work with CSPs to provide add-on services to existing cloud offerings. MSSPs, or Managed Security Service Providers, provide security services to an organization. CASBs, or Cloud Access Security Brokers, enforce security policy on the communication between users and multiple clouds.

Key characteristics of Cloud Computing are,

  1. On-Demand Services – Services available as per need
  2. Scalability – Resources can be added or removed
  3. Elasticity – Resources expand or reduce quickly per demand
  4. Broad Network Access – Available widely via the Internet
  5. Measured Services – Billed according to usage

Another key feature of the cloud computing model is multi-tenancy with isolation: onboarding multiple customers onto the same cloud infrastructure using resource pooling (combining all underlying resources) and oversubscription (assigning more resources to customers than physically exist, on the assumption that they won't all use full capacity at the same time).
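
Oversubscription is just arithmetic, which a short sketch makes concrete. The core counts, tenant allocations, and 40% utilization figure below are hypothetical numbers chosen for illustration.

```python
# Hypothetical host: 64 physical cores sold to five tenants.
physical_cores = 64
tenant_vcpus = [16, 16, 32, 32, 32]  # vCPUs promised to each tenant

# Oversubscription ratio: promised capacity vs. physical capacity.
oversubscription_ratio = sum(tenant_vcpus) / physical_cores  # 128 / 64 = 2.0

# The bet: at an assumed 40% average utilization per tenant, expected
# concurrent demand still fits within the physical cores.
expected_load = sum(v * 0.4 for v in tenant_vcpus)  # 51.2 < 64
```

The risk, of course, is correlated peaks: if every tenant spikes at once, demand (128 vCPUs) exceeds physical capacity and performance degrades for all of them.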

  • Virtualization – The motive of cloud computing is sharing resources among multiple users. Virtualization is the technology that achieves this goal, using software called a hypervisor. A hypervisor may be deployed directly on a bare-metal server (Type 1) or on top of a host operating system (Type 2).
  • Virtual Machines – Virtual machines, or VMs, are isolated guests deployed on top of virtualized servers, each with its own guest operating system. Security risks associated with VMs include VM escape (an attacker breaks out of the guest OS to reach the hypervisor or host) and VM sprawl (unused VMs are never deleted after use and accumulate unpatched vulnerabilities).
  • Cloud Resources – Cloud infrastructure provides three key resources for a guest OS running on a VM: compute (processors), network (VLAN-, VXLAN-, or IPsec-based segmented networks), and storage (block, object, or archive storage).
  • CSP-Managed Applications – Besides providing resources to spin up a new VM, CSPs may also provide applications directly to users without the need to create VMs. Web servers, databases, and load balancers are among the most common CSP-managed applications offered as services to end users. The key benefit of consuming these services directly is that they are cloud-native and optimized for the best performance in the cloud environment.
  • Orchestration and IaC – Cloud orchestration and Infrastructure as Code enable users to spin up new VMs, perform various life-cycle operations, and remove them when no longer required, using reusable code and cloud orchestration modules.
  • Security Responsibilities – Cloud security follows a shared responsibility approach where the onus is divided among service providers and clients. Based on the type of the cloud model (Public, Private, or Hybrid), or the type of service a customer is using (IaaS, PaaS, or SaaS), the scope of the security responsibility shifts among the users and providers.
  • Data Sovereignty – The cloud provides global deployment options, where a user can store data anywhere. Some countries, however, have rules barring local data from being stored in remote locations. Cloud security must cover this aspect and ensure data sovereignty is not compromised.
  • Cloud Security Mechanisms – The cloud uses a defense-in-depth approach to give virtualized workloads on-premises-level security. Security groups in the cloud are sets of ports, protocols, and IP addresses that can be attached to an application or VM to control network access to it. TLS-based encryption protects data in transit, while data-at-rest encryption protects stored data. Virtual firewalls and secure web gateways can further control and monitor traffic going in and out of the cloud. Cloud providers also have built-in IAM to ensure users with direct access to the cloud environment can perform only the operations their policies allow.
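
The security-group model described above is easy to sketch: a list of allow rules (port, protocol, source CIDR) evaluated default-deny against incoming traffic. The rules below are hypothetical examples, not a real provider's API.

```python
import ipaddress

# Hypothetical allow rules for a VM: default-deny, allow on any match.
rules = [
    {"protocol": "tcp", "port": 443, "cidr": "0.0.0.0/0"},   # HTTPS from anywhere
    {"protocol": "tcp", "port": 22,  "cidr": "10.0.0.0/8"},  # SSH, intranet only
]

def is_allowed(protocol, port, source_ip, rules=rules):
    """Return True if any rule matches the protocol, port, and source."""
    src = ipaddress.ip_address(source_ip)
    return any(
        r["protocol"] == protocol
        and r["port"] == port
        and src in ipaddress.ip_network(r["cidr"])
        for r in rules
    )
```

Real security groups (e.g., in AWS or Azure) add statefulness and port ranges, but the core evaluation is this same default-deny match.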

This ends Section 2 of the CompTIA Security+ SY-601 exam. In Part 4, we will proceed to the next section of the exam, Security Implementations.
