For the remaining topics of Domain 1 of the CCSP exam – Cloud Concepts, Architecture, and Design – we will cover security concepts relevant to cloud computing, security standards, and the top common threats to the cloud.
1.3 Understand Security Concepts Relevant to Cloud Computing
1.3.1 Access Control, Network Security, Virtualization Security
Cloud architecture planning with security as the goal must begin with business decisions, with the vision set by management and the budget allocated to the Cloud Architect and Service Manager. The Cloud Security Alliance (CSA) provides a general enterprise architecture following a top-down approach, with supporting frameworks like TOGAF, ITIL, and SABSA.
SABSA, or Sherwood Applied Business Security Architecture, provides a matrix-based model to define conceptual and contextual architecture based on business requirements, and maps them to the logical & physical architecture. Operational architecture is also defined here for the performance and operational management of all components.
ITIL, or the Information Technology Infrastructure Library, is a framework for service management that provides customizable practices across the service lifecycle, covering stages such as service strategy, service design, service transition, and service operation.
TOGAF, or The Open Group Architecture Framework, provides a standardized approach to enterprise architecture across the BDAT areas: Business, Data, Application, and Technology. The key goals of TOGAF are avoiding vendor lock-in, demonstrable ROI, common communication between all stakeholders, and saving cost & time.
The Jericho Forum, now part of The Open Group, provides a framework for cloud security on the basis of its Cloud Cube Model. The model's dimensions classify cloud formations by whether applications require an Internal or External location, run on Proprietary or Open-source technology, sit behind the firewall in silo-based Perimeterised systems or in open-collaboration De-Perimeterised systems, and are Insourced or Outsourced.
A few key abstracts for building an enterprise architecture using cloud technology from NIST are,
- Protect the cloud with built-in and third-party solutions to enable trust, with audits and evidence to prove the claims
- Cross-platform capabilities, Interoperability, Elasticity, Multi-tenancy, and Open-source adoption to build & support cloud-native infrastructure
- Plan role-based access policies, network access controls, security groups, and IAM policies to enhance defense in depth
- Comply with and follow international and local regulations to build a reputation in the market
- Portability and highly available solutions to avoid single points of failure and vendor lock-in scenarios
- Cloud governance extends existing business governance, with relevant reporting, metrics, and activities related to changes, downtime, and upgrades inside pre-defined SLAs
Let us take a deep dive into a few key areas to be considered for building a secure cloud architecture,
- Network Security Planning – Network Security planning covers both physical and logical aspects of cloud deployments.
- Physically, all equipment must be inside a secured data center, with access controlled and monitored using centralized facility access solutions based on biometrics and smart cards. For Public Cloud, CSPs own this aspect of security. For Private and Hybrid solutions, organizations may need to purchase solutions for physical access security monitoring.
- Logically, cloud solutions have multiple sources and destinations for different kinds of data, using a wide variety of ports and protocols. All traffic must be secured, encrypted, and segmented for reliable connections with end-users, without impacting the Confidentiality, Integrity, Availability, and Privacy of user sessions. In AWS, a VPC or Virtual Private Cloud provides a segmented network for customers, with its own subnets, security groups, network access rules, etc. Solutions like Transit Gateways, NAT Gateways, or AWS Direct Connect can help in connecting to these VPCs. A VNet in Azure provides a comparable range of networking functions, including routing, access control, connectivity between virtual machines (VMs), DHCP, and virtual private networks (VPNs).
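The logical segmentation described above can be sketched with Python's standard `ipaddress` module. The CIDR ranges and tier names below are illustrative, not tied to any real deployment, and the same arithmetic applies whether the network is an AWS VPC or an Azure VNet:

```python
import ipaddress

# Hypothetical VPC/VNet address range; real values depend on your network plan.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the range into /24 subnets and assign the first few to tiers,
# mirroring how a VPC is segmented into public and private subnets.
subnets = list(vpc.subnets(new_prefix=24))
tiers = {
    "public-web": subnets[0],    # internet-facing load balancers
    "private-app": subnets[1],   # application servers
    "private-data": subnets[2],  # databases, no direct internet route
}

for name, net in tiers.items():
    print(f"{name}: {net} ({net.num_addresses} addresses)")

# A simple containment check: does an instance IP fall inside the data tier?
instance_ip = ipaddress.ip_address("10.0.2.17")
print(instance_ip in tiers["private-data"])  # True
```

Security groups and network ACLs then reference these subnet ranges, so getting the segmentation right up front simplifies every access rule written afterwards.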
- Encryption Solutions – In our CompTIA Security+ study blogs, we discussed encryption and cryptography extensively, so here we will focus on how they can be implemented to achieve Confidentiality, Integrity, and Privacy.
- Data in transit, or in motion, travels between a user's device and a web application on the cloud, between two VMs on the cloud, or between cloud and non-cloud infrastructure. Such traffic is susceptible to sniffing and man-in-the-middle attacks. Data can be encrypted by end-users before being sent across the cloud (client-side encryption), or customers may use the CSP-provided server-side encryption. For data in motion toward web servers via API calls, HTTPS using TLS is recommended. Additionally, IP Security (IPsec) is a widely used and adopted transit encryption protocol for virtual private network (VPN) tunnels.
- Data at rest, including data stored on storage disks in the cloud, can likewise be encrypted by the end-users before being sent to the cloud (client-side encryption) or by the CSP-provided server-side encryption. The choice of encryption algorithm plays a critical role in the performance of applications that rely on the encrypted data. For real-time applications, a faster algorithm with a smaller key size may be chosen, while bulk sensitive data can be secured using stronger algorithms.
- KMS, or a Key Management System, provides a secure solution to hold the keys used for encryption and decryption of data. The KMS choice creates a tradeoff. On one side, to maintain consistency, ease of use, and integration with CSP-provided services, customers may prefer to use the CSP's KMS solution. On the other side, that may lead to vendor lock-in and defy the segregation of duties, so a customer may instead use a client-side KMS solution to get more control over the generation and retention of the keys.
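The KMS tradeoff above centers on who holds the keys. The envelope-encryption pattern used by cloud KMS services can be sketched as follows; note the XOR keystream here is a deliberately toy stand-in for a real cipher such as AES-GCM, so treat this as an illustration of the key hierarchy only, not usable cryptography:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher built from SHA-256 in counter mode.
    Illustrative only -- use a vetted AEAD cipher (e.g. AES-GCM) in practice."""
    out = bytearray()
    for block in range(0, len(data), 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[block:block + 32], pad))
    return bytes(out)

# Envelope encryption: a per-object data key encrypts the data,
# and the KMS-held master key encrypts (wraps) the data key.
master_key = secrets.token_bytes(32)   # never leaves the KMS
data_key = secrets.token_bytes(32)     # generated per object
plaintext = b"customer record to protect"

ciphertext = keystream_xor(data_key, plaintext)
wrapped_key = keystream_xor(master_key, data_key)  # stored beside ciphertext

# Decrypt: unwrap the data key with the master key, then decrypt the data.
recovered = keystream_xor(keystream_xor(master_key, wrapped_key), ciphertext)
print(recovered == plaintext)  # True
```

The design choice the sketch highlights: only the small wrapped key, not the bulk data, ever needs to pass through the KMS, and rotating the master key means re-wrapping keys rather than re-encrypting every object.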
- Identity and Access Management (IAM) – Identity and Access Management involves the control and management of the users who can access any enterprise resource. It basically works as a three-step process: identify yourself, prove that the identification you provided is authentic, and lastly, get authorization to perform activities based on the access role assigned to you. IAM must be implemented with multi-factor authentication solutions; the most widely used factor is the one-time password, or OTP. Key phases of an IAM solution for the cloud are,
- Provisioning and Deprovisioning – Provisioning and deprovisioning in the context of IAM mean the setting up and removal of user accounts. The whole process must be streamlined, standardized, and efficient to ensure traceability of new users created in the system. Deprovisioning, or removal, is equally vital when users leave the organization or change their role or function in the enterprise. Deprovisioning can be considered a risk mitigation technique, ensuring that stale entries for users or assets are removed from the database.
- Centralized Directory – A centralized directory provides the database that stores all user details, from which the system can query information as required. LDAP, or Lightweight Directory Access Protocol, can be used to query the information stored in the central directory server. These servers can be generic open-source LDAP servers or licensed solutions like Microsoft Active Directory. LDAP servers commonly identify each stored record by a Distinguished Name (DN), with each DN corresponding to one entry inside the server. Centralized directory solutions must be secure and trustworthy, with zero tolerance for accuracy and integrity breaches.
- User Privileges – A compromised user account is far more impactful if it also happens to hold elevated privileges, such as those of an admin user. It is vital to keep authorization policies up to date, with regular audits of user accounts and their respective rights. The policy of least privilege and separation of duties are two key techniques for mitigating the risk associated with user privileges. Detection of abnormality is equally important, hence systems for logging user activities, successful and failed login attempts, etc. need to be in place.
- Authorization and Access Management – Authorization and access management cover authorizing the user to perform a set of actions as per the assigned policy, and authenticating the user with credentials plus additional factors like biometrics, one-time passwords, etc. Both activities are vital for the overall security posture and rely on the accuracy of the authentication solution along with up-to-date policies for authorization.
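The OTP factor mentioned above is standardized as HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238), both implementable with Python's standard library alone. The secret below is the RFC's published test key, so the expected codes come straight from the specification's test vectors:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """Time-based OTP (RFC 6238): HOTP over a 30-second time counter."""
    counter = int((time.time() if at is None else at) // step)
    return hotp(secret, counter)

# RFC 4226 test secret "12345678901234567890"; counter 0 must yield 755224.
secret = b"12345678901234567890"
print(hotp(secret, 0))  # "755224"
```

Because both sides derive the code from a shared secret and a moving counter, the password is useless seconds after use, which is exactly why OTP is the most common second factor in cloud IAM deployments.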
- Data Sanitization with Interoperability – While considering cloud-based solutions, it is important to remember that we might need to migrate from one CSP to another or extend our deployment across multi-cloud environments. The success of this whole operation depends on the interoperability of the cloud-native applications deployed on the cloud. Beyond interoperability, data may need to be transferred from one CSP to another, and it must remain inaccessible to unauthorized parties during the whole process to ensure the confidentiality of the information stored. Challenges associated with this migration are,
- Vendor lock-in, where data stored in one cloud solution cannot be moved out, either due to regulatory roadblocks, or proprietary and incompatible storage solutions
- Reconstructing information in a similar format, once data is migrated to a new cloud solution
- Data erasure, since information stored in a CSP's cloud cannot be erased by traditional degaussing techniques. We need cryptographic erasure and data overwriting to make the data inaccessible for recovery. Do note that neither technique is a foolproof solution; they only make recovery harder.
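Cryptographic erasure can be illustrated in a few lines: if data only ever reaches storage in encrypted form, destroying the key renders every copy, including the CSP's snapshots and backups, unreadable. The XOR keystream below is again a toy stand-in for a real cipher, used only to show the pattern:

```python
import hashlib
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy SHA-256 counter-mode keystream; illustrative only, not real crypto.
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[i:i + 32], pad))
    return bytes(out)

key = secrets.token_bytes(32)
record = b"PII to be retired at end of contract"
stored = xor_stream(key, record)   # only ciphertext ever hits the disk

# Cryptographic erasure: destroy the key, not the (possibly unreachable) media.
key = None

# Without the key, the ciphertext on every replica and backup is just
# unrecoverable bytes -- no degaussing of remote hardware required.
print(stored != record)  # True
```

Note the same caveat as the text above: this only makes recovery computationally infeasible, and it depends on the key truly being destroyed everywhere, including in the KMS and its backups.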
- Virtualization Security – Virtualization is the basis of cloud computing, without which the whole concept ceases to exist. Virtualization enables sharing of resources from the underlying hardware across multiple guest operating systems. While virtualization is the technology behind cloud computing, the hypervisor is the software that creates the virtualized infrastructure in the first place. The two key types of hypervisors are Type-1 and Type-2. Let us explore the differences between them and their associated security challenges.
- Type-1 Hypervisor – This type of hypervisor is installed directly on bare metal and doesn't need any host operating system to be in place. All VMs with their guest operating systems run directly on the hypervisor. ESXi and Xen are two widely known Type-1 hypervisors. Since there is no additional host operating system layer, the attack surface in a Type-1 hypervisor-based deployment is significantly reduced. The whole infrastructure is more robust, reliable, and secure, as cloud vendors can maintain packages for patching, control of input/output packets, and up-to-date drivers.
- Type-2 Hypervisor – This type of hypervisor is installed on top of a host operating system. All VMs with their guest operating systems run on this host-OS-plus-hypervisor infrastructure. VMware Workstation and Oracle VirtualBox are two widely known Type-2 hypervisors. With an additional operating system layer, which may or may not be recommended by the hypervisor software vendor, the attack surface increases. There is a chance of non-standard or unpatched drivers running on the host operating system, compromising the security of the whole cloud infrastructure.
- Breach in Accounts – We have discussed account-related attacks briefly before as well. As long as activities require access to any system, whether through CLI, GUI, or API, there will always remain a constant risk of incursion. Attack methods like phishing, or smokescreens on the compromised systems of vendors and third-party suppliers, are a few of the techniques attackers use to extract credentials and spoof identities, or to sniff and eavesdrop on traffic. Another widely popular way to access and manage cloud-based deployments is via HTTPS-based APIs. APIs create a different kind of security challenge, particularly when third-party applications use customized API messages to call functions in the cloud platform for their integration testing and deployments.
- Data Confidentiality, Integrity, and Availability – Similar to the age-old threats of identity spoofing and eavesdropping through account breaches, data breaches are another "always-on" threat. Cloud solutions amplify the attack surface, with multiple functions offered as services, deployed on virtual machines in a multi-tenant infrastructure. Data breaches are a highly critical risk for an organization and must be handled with urgency, including reporting to the relevant authorities based on the type of data stored and the geographical region of the cloud. Data loss is another sub-category of the data CIA triad, dealing with the loss of information stored on the cloud. It may be due to improper storage planning according to the CAP (Consistency, Availability, and Partition tolerance) theorem, keeping data unencrypted and enabling man-in-the-middle attacks while data is in transit, or losing the key used to encrypt the data. The shared responsibility model complicates things further, as data backup may or may not be the responsibility of the CSP.
- Service Availability – Unlike data availability, which may or may not impact the business right away depending on the backup strategy in place, service availability impacts the business immediately. A service may become unavailable for multiple reasons, like bad deployment planning without Active-Standby sites or without proper scaling strategies based on dimensioning and resource usage. From the security perspective, however, denial-of-service (DoS) attacks are the most common type of threat faced by services running on the cloud. They can have a huge impact if the deployment has a single point of failure, as services will become unavailable.
- Human Threat – Human threats cover multiple scenarios. Malicious insiders like a disgruntled employee or rogue ex-employee can attack the cloud infrastructure with ample knowledge of the CSP's security strategies, data stores, and network architecture. Another scenario arises when the immense computing resources of virtualized services on cloud platforms, available from multiple geo-locations at low cost, are used by attackers to conduct cyber attacks like DDoS or to compute password hashes against different victims. The third scenario, related to the people working on the cloud as well as the systems deployed on it, covers the risk due to shared technology. Often, third-party applications are deployed or integrated with services running on the cloud without due checks to verify that they are patched and free of known vulnerabilities. It is important in this case to have a defense-in-depth model, to ensure that compromising one system does not impact other entities.
1.3.2 Common Threats
So far, we have covered various strategies to deploy the cloud, different services offered by the CSPs, and various common cloud threats. Let us drill down a bit into the threats associated with each service model.
- Threats to the Infrastructure as a Service Model – Threats to the IaaS model are similar to threats against physical devices. Let us move from bottom to top through the IaaS service layers.
- In the bottom-most layer, we have the physical servers. They always have risks associated with unpatched firmware, end-of-support devices, and unchecked access to racks & datacenters.
- The hypervisors installed on the physical servers have multiple risks associated with them. Since cloud promotes multi-tenancy, VMs running on the hypervisor share common physical network interfaces, often a single virtual switch for layer-2 switching, and a shared set of resources. Attackers may install a malicious hypervisor or enable rootkits in the software, enabling VM escape attacks that let them move East-West among the VMs or North-South from the VMs to the hypervisor. ARP spoofing or DoS attacks can be conducted on virtual switches.
- VMs running on the hypervisor may themselves have vulnerabilities, which may lead to VM escape attacks from a VM to the hypervisor. Another associated risk is loss of control: as the infrastructure is provided by the CSP, Active-Standby planning is a must-have requirement, or else there is always a chance of a single VM going down and becoming inaccessible to clients.
- Threats to the Platform as a Service Model – Platform as a Service brings its own unique set of challenges. We can compare it to handing testers a ready-made boilerplate environment to perform their testing, with the assurance that the platform will always be available and accessible. PaaS follows a similar approach, with the associated risks below,
- As developers may be testing their software on a PaaS platform, there may be scenarios where they use old unpatched software or leave dead code behind, which may lead to vulnerabilities in the platform
- There also lies a grey area around access ownership of the infrastructure on top of which the platform is installed. A developer testing an application on PaaS may not be an expert in the infrastructure, be it physical or virtual, and hence must have limited or no access to the infrastructure components. IAM solutions can be implemented to authenticate and authorize these types of access.
- Threats to the Software as a Service Model – A Software as a Service offering provides only application access to the end-users. They are unaware of where the application is deployed, how it is designed, and what kind of security setup the application and its underlying infrastructure use. This brings the challenges below,
- SaaS applications need to be stable, so the backend may be running old, unpatched but stable versions of operating systems or processes. This ensures the least downtime for the application but may lead to vulnerabilities.
- SaaS applications in the modern day are regularly deployed on the cloud-native model, which may not actually be secured by traditional information security solutions.
1.3.3 Top Threats based on OWASP
OWASP, the Open Web Application Security Project, which provides guidelines and recommendations on web application security, publishes its Top 10: a list of the most critical risks to assess in web application security.
- Injection – Injection covers scenarios where malicious data or commands are injected as input to the web application, in an attempt to cause malformed behavior or extract data stored in the backend. Examples include SQL injection.
- Authentication – Authentication involves situations like spoofing identity, stealing or cracking credentials, and breaking into the system without permission. Incomplete IAM solutions may lead to this kind of threat.
- Cross-Site Scripting – XSS or Cross-Site Scripting attacks try to hijack user sessions by running malicious scripts in the client's browser. The script is added by the attacker to the website's HTML page and runs in the end-user's browser as part of the dynamic content of the authentic website.
- Direct Reference – Insecure direct object referencing is a development-phase error wherein the developer exposes a direct reference to information stored in the database, which an attacker can then access without any access checks.
- Misconfiguration in Software – Misconfiguration in software can be due to multiple reasons, like using outdated operating systems, non-compliance with the latest secure frameworks, and insufficient use of secure application coding practices.
- Data Exposure – Data must be secured at rest and in transit with up-to-date encryption mechanisms. Web applications are extremely vulnerable today, as users store their credit card numbers, social security numbers, etc. trusting the website owner; this can be exploited if the data is not masked properly when stored in the backend.
- Access Controls for Functions – Web applications depend in the background on multiple functions, called for each user action by the front-end. Access control for these cross-function calls may be missed or improperly configured, especially in today's serverless cloud-based deployments. Improper access controls for functions can be exploited by attackers.
- Cross-Site Request Forgery – Cross-site request forgery is a scenario where the attacker causes the victim's browser to send a forged request, carrying the victim's session information, to a vulnerable website. Both the vulnerable application and the victim are unaware of this. Thus, the web application processes the request as legitimate and may respond with sensitive information.
- Vulnerable Components – This point expands on what we covered under software misconfiguration. Web applications and their backend databases may be using vulnerable, outdated libraries, software modules, etc., which add to the vulnerability risk of the application.
- Insecure Redirects and Forwards – Web pages are often written with hyperlinks that land users on different websites. These redirects and forwards can be exploited by an attacker if proper validation is not conducted on the destination pages. For example, users may land on a website hosted by the attacker to perform phishing attacks.
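The first item on the list, injection, is also the easiest to demonstrate concretely. The minimal `sqlite3` sketch below (the table and input values are invented for illustration) shows why parameterized queries, rather than string concatenation, defeat SQL injection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "bob' OR '1'='1"

# Vulnerable: string concatenation lets the quote break out of the literal,
# so the OR clause matches every row in the table.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(vulnerable)  # [('alice',), ('bob',)] -- the injection returned all users

# Safe: a parameterized query treats the entire input as one literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # [] -- no user is literally named "bob' OR '1'='1"
```

The same principle, keeping untrusted input out of the query's structure, is what prepared statements and ORM query builders enforce in every mainstream database driver.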
1.4 Understand Design Principles of Secure Cloud Computing
1.4.1 Cloud Secure Data Lifecycle
In any organization, information is stored in the form of data and is shared according to the use case. It is the security of this data that plays a crucial role in defining an organization's information security structure. The data lifecycle follows six stages, namely Creation, Storage, Usage, Sharing, Archiving, and Destruction. Irrespective of the stage in the lifecycle, it is the responsibility of the information security team to control and monitor access to the data and its security. The governance of the data is driven by the five factors below,
- Information Location – It is important to determine where data is physically located, especially in cloud-based deployments. Geographical dynamics of the data may change the rules applied according to the local laws and jurisdiction.
- Information Management – Information management determines the action which can be executed on a set of data.
- Information Classification – Information must be classified based on the level of access allowed for it, for example, information may be confidential, secret, etc.
- Authorization – Authorization determines the authority of any data, and who can access which part of the information.
- Custodianship – The owner of data may store it in the custody of the service provider, based on the prior agreement, hence shifting the ownership of maintaining data confidentiality, and integrity to the custodian.
1.4.2 Business Continuity Planning and Disaster Recovery Planning
Both Business Continuity Planning and Disaster Recovery Planning are vital for an organization's security goals; however, they are often used interchangeably, which is incorrect. Business Continuity Planning deals extensively with the management's plan to reduce downtime during an incident of any kind, or even during planned change management activities. The goal is to keep the business operating and functioning, always. Disaster Recovery Planning, on the other hand, involves a fallback plan, in the event of a disaster, to recover business-critical applications to a partial or full-service state. The goal is to bring back any failed service, application, or system with zero or minimal downtime. In the context of cloud-based deployments, it is important to understand that not all services are equal, and planning needs to reflect that.
Together, Business Continuity and Disaster Recovery planning are combined into a single term, BCDR, which is signed as an agreement between CSPs and cloud customers, primarily on the basis of the service level agreement (SLA) and the shared responsibility model. Not all aspects of an application's continuity and disaster recovery planning are under the ownership of either the CSP or the cloud customer alone, hence they must be signed off and agreed upon upfront, before designing and deploying the application on the cloud.
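One concrete input to BCDR planning is the downtime budget implied by an SLA's availability percentage. The arithmetic is simple enough to sketch; the SLA figures and the incident duration below are illustrative:

```python
# Convert an availability SLA into a monthly downtime budget, and check
# whether a given incident stayed inside it.
def downtime_budget_minutes(availability_pct: float, days: int = 30) -> float:
    total_minutes = days * 24 * 60          # minutes in the period
    return total_minutes * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% availability -> {downtime_budget_minutes(sla):.1f} min/month")

# A hypothetical 25-minute outage against a 99.9% SLA:
incident_minutes = 25
print(incident_minutes <= downtime_budget_minutes(99.9))  # True: within budget
```

A 99.9% SLA allows roughly 43 minutes of downtime per 30-day month, while 99.99% allows under 5; this is why "adding a nine" to an SLA changes the Active-Standby and scaling design so dramatically.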
1.4.3 Cost Benefit Analysis
The cloud, with its resource pooling model, brings a huge cost benefit when compared to legacy on-prem systems. There are multiple direct and indirect benefits which help in cost reduction and improve the financial budget of any organization. A few key benefits are,
- Resource Pooling – We have discussed resource pooling extensively: because of cloud and virtualization, multiple applications can share underlying hardware, unlike a traditional monolithic application.
- CapEx vs OpEx – With the cloud, organizations no longer invest capital expenditure in buying and managing infrastructure; instead they handle application deployment and operations on a daily basis. This also reduces the indirect costs of air conditioning, server space, and keeping a team of engineers solely for managing the hardware and infrastructure.
- Licensing Cost – Application owners and cloud customers are spending far less on software licenses and subscriptions, as most components are deployed and managed by the CSPs, who have ownership of licensing and patching the majority of cloud-based components.
- Cost Associated with Time – This is another indirect benefit, but the cost associated with time can make a big difference as time goes on. A few examples are the depreciation cost of infrastructure, vendor end-of-support or end-of-life for hardware and software, time to recovery, and configuration time for legacy applications. All these costs are a big overhead on the organization unless shifted to the cloud service provider.
- Miscellaneous Cost – Miscellaneous costs associated with IT infrastructure include keeping infrastructure compliant through regular paid audits, focusing on the core competency instead of keeping a team of engineers managing infra, training of team members, governance, etc.
1.5 Evaluate Cloud Service Providers
1.5.1 Verification Against Criteria
Cloud security compliance is measured against multiple frameworks and standards. The reason multiple standards exist is that no single standard has yet been created for cloud security evaluation. Hence, we use several different standards to evaluate the security compliance followed by organizations.
- ISO27001 – The foremost standard followed here is the vendor-agnostic ISO27001. Various key domains are considered as part of ISO27001, some of which are Information Security, Asset Management, Cryptography, Incident Management, etc.
- ISO27002 – While ISO27001 is the standard, ISO27002 provides the framework organizations can follow as best practice to comply with ISO27001. It can also be used to implement commonly accepted information security controls and to develop an organization's own customized guidelines.
- ISO27017 – ISO27017, on the other hand, provides cloud-specific guidelines on top of ISO27001 and ISO27002. The frameworks provided by ISO27017 can be used by both CSPs and cloud service customers.
- SOC Reports – Service Organization Control (SOC) reports are prepared by auditors as part of a comprehensive analysis of the overall completeness and effectiveness of the controls implemented at a service organization. There are three types of SOC report,
- SOC 1 – It mainly focuses on the controls of the service provider that may be relevant to an audit of a subscriber's financial statements. It is further subdivided into Type 1 and Type 2: a Type 1 report gives the auditor's view of the accuracy and completeness of the organization's controls in place, while a Type 2 report has similar details plus information about how the company's control environment operated over the audit period.
- SOC 2 – It focuses on the actual controls put in place in an organization with respect to operations and compliance. The information captured in a SOC 2 report does not include financial information and hence is not significant to the financial statements. However, given the level of detail it contains, this report is typically restricted to interested information technology personnel.
- SOC 3 – It has similar content to the SOC 2 report but is shared widely, with fewer details. The purpose of SOC 3 reports is to give users confidence that the organization complies with the controls recommended by the standards.
- NIST 800-53 – NIST is a US government agency that provides standards to be followed by industry and government programs, covering the controls that should be in place and whether further enhancement is required per the standard. There are overlaps between the guidelines provided by NIST 800-53 and ISO 27001/2, and mappings between them can be found online.
- PCI-DSS – Credit card companies follow PCI-DSS, which provides guidelines to be followed by merchants and banks for all electronically transferred money and transactions. It provides a good financial information security structure, with recommendations like encrypted transfer of bank details, masking of card numbers, firewall settings for merchant devices, etc.
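Two of the PCI-DSS practices above, masking of card numbers plus basic validation, can be sketched in a few lines. The test number is the well-known Visa test PAN, and the masking rule shown (all but the last four digits) is one common choice within what PCI DSS permits to be displayed:

```python
def luhn_valid(pan: str) -> bool:
    """Luhn checksum used to sanity-check card numbers (a format check,
    not a security control by itself)."""
    digits = [int(d) for d in pan[::-1]]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def mask_pan(pan: str) -> str:
    """Mask a PAN for display; PCI DSS allows showing at most the first
    six and last four digits, and this variant shows only the last four."""
    return "*" * (len(pan) - 4) + pan[-4:]

test_pan = "4111111111111111"   # standard Visa test number
print(luhn_valid(test_pan))     # True
print(mask_pan(test_pan))       # ************1111
```

Masking governs what may be displayed; the full PAN, wherever it must be stored, still has to be encrypted or tokenized under separate PCI DSS requirements.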
1.5.2 System Product Certification
Following a recognized framework is one aspect of ensuring a product's credibility. Another key part of the recognition process is system and subsystem product certification. This certification is evaluated by third parties after multiple rounds of testing, to validate the product owner's claim that their product meets the relevant standards and is certified to be launched in the market. The two internationally recognized sets of guidelines and frameworks for product certification are below,
- Common Criteria, or CC – Common Criteria is an internationally recognized set of guidelines created to evaluate information security products and ensure they meet government standards. The two main components of the CC guidelines are,
- Protection Profiles – Which defines a standard set of security requirements for a specific type of product, such as a firewall, IDS, etc.
- Evaluation Assurance Levels (EAL) – This defines how thoroughly the product is tested, on a scale of seven EAL levels, with seven being the highest (most thoroughly tested) and one the lowest (basic functional testing).
- FIPS or Federal Information Processing Standard – Unlike CC, which certifies a product as a whole, FIPS 140-2 covers cryptographic modules for both traditional and cloud-based data. FIPS provides four levels of rating to accredit and distinguish the cryptographic modules used by infrastructure to secure data at rest or in transit.
- Level 1 – The least secure level; to get certified, the module must implement at least one approved secure algorithm
- Level 2 – This level requires securing the module's perimeter, with tamper-evident protection and controlled access to the encryption keys
- Level 3 – This level adds tamper detection and response mechanisms to the Level 1 and 2 security requirements.
- Level 4 – This is the highest level, covering adequate and effective mechanisms to provide rigid, foolproof security without compromise. It is important to note that FIPS is a US government standard, while CC is recognized internationally through a mutual recognition arrangement; in either case, certification can improve a product's credibility compared to non-certified solutions.
This ends Part 2 of Domain 1 of the CCSP exam. In Part 3, we will proceed to the next domain of the syllabus, Cloud Data Security.