Cloud Security: How to Cover your Blind Spots
Enterprises and businesses of all sizes have already tested the waters in cloud platforms. But how can you safeguard your deployment?
Since agility and speed are among the primary drivers of cloud transformation journeys, security often becomes an afterthought. But with a new organization falling victim to ransomware every 14 seconds, the potential impact of overlooking cloud security can be crippling for any business.
In our latest whitepaper, we explore the often overlooked security considerations for keeping your data safe in the cloud. Learn:
- How to prevent cloud security blind spots
- Why traditional security protection approaches are no longer enough
- How to successfully manage cloud security via the controls of each service model
Cloud computing has come of age and is now mainstream
Enterprises and businesses of all sizes have already tested the waters in cloud platforms. Developers play a significant role in the cloud journey and often have been given full freedom to deploy their apps to the cloud without proper due diligence. Since agility and velocity are among the primary drivers of cloud transformation journeys, security often becomes an afterthought. When we bet big on the cloud, it is absolutely critical to ensure an adequate security foundation is built while embarking upon the journey.
This study puts into perspective the potential impact on enterprises of overlooking cloud security diligence.
Cost of a Data Breach Report 20191
A recent report shows that the average cost of a data breach is $4 million. A new organization will fall victim to ransomware every 14 seconds in 2019, and every 11 seconds by 2021. (Source: Cyber Security Ventures)
In addition to the cost of recovering lost data, lax cloud security can result in a range of added costs such as reputation damage, regulatory fines, etc.
In this document, we cover some of the major security blind spots customers often fail to notice while they build their apps and infrastructures in public clouds.
This whitepaper is derived from RCG’s experience in helping enterprises with their cloud migrations, combined with research on industry trends and best practices.
Cloud security blind spots
In a traditional data center model, the Infra-Sec-Ops team diligently protects their fort, relying on their formidable command of the technologies and tools they use. Developers don’t worry about the infrastructure or perimeter security. However, in the journey towards the cloud, developers typically take the lead, starting pilots and experiments on their own. They are rarely trained adequately on the security or governance aspects of the new software-defined arena. At the same time, the Infra-Sec-Ops team is asked to design the security foundation for the new cloud platform.
The cloud is different where perimeter-based security architecture is concerned. Because the infrastructure team comes from the traditional data center security world, the new cloud services and tools look alien to them, and they often fail to design and implement the best possible security foundation in the cloud. This situation is especially challenging for a large enterprise: it must comply with tough regulatory norms while remaining diligent in deploying workloads with maximum protection against malicious intruders.
In the subsequent sections, we’ll cover some of the most common security considerations to keep in mind while building server infrastructure and apps in the cloud. These pointers are generally applicable to all leading public cloud platforms. However, some are specific to individual CSPs – AWS, Azure, and GCP – because of their differing maturity levels and specialized features.
#1: Privileged Access Management (PAM) in the Cloud
In a recent survey of 1,000 global leaders, 74% confirmed that access to privileged accounts played a part in cyber breaches occurring in their IT environment.
The same survey2 finds that only 55% of US and UK enterprises use privileged account management solutions to control their cloud workloads. This gap makes privileged access management blind spots an attractive target for attackers. A common mistake is sole reliance on a cloud provider’s identity and access management tools, which weakens the defense against intruders. There are also instances3 of organizations going out of business in a single day due to complacency in managing cloud console access.
Enterprises should consider multi-directory brokering solutions to enhance access control for cloud consoles and critical systems in the cloud.
Foundational tenets for safeguarding privileged access to your cloud environment include:
- Implement password vaults for managing admin credentials4. This helps enforce the use of highly complex, secure passwords without relying on personal text files to store them.
- Strengthen authentication: Enhance protection across the entire lifecycle touchpoints of cloud privileged accounts4. Think beyond passwords in implementing additional layers of protection such as multi-factor authentication.
- Implement mechanisms for quickly identifying leaked credentials. Use behavior-based anomaly detection tools that can detect anomalies like time-travel (privileged authentication from two distant geographical locations within a few minutes).
- Automate threat response for any cloud access anomalies5.
- Structure cloud accounts and cloud governance to reduce the blast radius of an account compromise.
- Enable audit logs at all levels in cloud identity and access management. The logs must be constantly examined by the detection tools6 for finding any anomalies.
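The "time-travel" anomaly mentioned above can be detected without any vendor tooling by computing the implied travel speed between two privileged logins. The sketch below is plain Python with made-up event fields and a 900 km/h speed limit as an assumed plausibility threshold, not a specific product's algorithm:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(event_a, event_b, max_speed_kmh=900):
    """Flag two logins whose implied travel speed exceeds a plausible limit.

    Each event is a dict: {"ts": epoch_seconds, "lat": ..., "lon": ...}.
    900 km/h roughly matches commercial air travel (an assumed threshold).
    """
    hours = abs(event_b["ts"] - event_a["ts"]) / 3600
    if hours == 0:
        return True  # simultaneous logins from two places
    distance = haversine_km(event_a["lat"], event_a["lon"],
                            event_b["lat"], event_b["lon"])
    return distance / hours > max_speed_kmh

# Admin login from New York, then from Singapore 10 minutes later:
ny = {"ts": 0, "lat": 40.71, "lon": -74.01}
sg = {"ts": 600, "lat": 1.35, "lon": 103.82}
print(is_impossible_travel(ny, sg))  # True -> raise an alert
```

In practice this check would run over the audit logs enabled in the last bullet, feeding alerts into the automated threat response described above.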
#2: Open SSH/RDP gate for everyone
This is a very common scenario: the initial cloud explorers (developers) use the cloud admin console to spin up instances and networks, leaving the default network settings unchanged.
A typical mistake made while provisioning new instances in the cloud is keeping the SSH (port 22) or RDP (port 3389) open to connection from everywhere. This is an ideal scenario for someone attempting a dictionary attack to get into the servers.
Consider using bastion hosts to provide access to cloud servers. A bastion server is a dedicated server from which SSH or RDP sessions to cloud workloads are allowed; restricting SSH and RDP to the bastion reduces the attack surface exposed to port scans. Also, implement multi-factor authentication for logins to the bastion host.
Recently, Microsoft released8 a managed bastion service for Azure in which the Azure team manages the bastion hosts, so customers don’t need to worry about scaling up, securing, or maintaining them.
Alternatively, AWS is evangelizing7 a bastion-less world with the introduction of the AWS Systems Manager tool where the shell runtime on the target servers is available via a ‘Session Manager’ feature.
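Auditing for the open-gate mistake is straightforward to automate. The sketch below is plain Python over dictionaries that only mimic the shape of cloud security-group ingress rules (the field names are our own, not any provider's API), flagging SSH/RDP ports reachable from the whole internet:

```python
RISKY_PORTS = {22: "SSH", 3389: "RDP"}

def open_admin_ports(rules):
    """Return (port, service) pairs exposed to the whole internet.

    `rules` mimics cloud firewall / security-group ingress rules:
    dicts with "from_port", "to_port", and a list of "cidrs".
    """
    findings = []
    for rule in rules:
        if "0.0.0.0/0" not in rule.get("cidrs", []):
            continue
        for port, service in RISKY_PORTS.items():
            if rule["from_port"] <= port <= rule["to_port"]:
                findings.append((port, service))
    return findings

rules = [
    {"from_port": 443, "to_port": 443, "cidrs": ["0.0.0.0/0"]},    # fine: HTTPS
    {"from_port": 22, "to_port": 22, "cidrs": ["0.0.0.0/0"]},      # SSH open to the world
    {"from_port": 3389, "to_port": 3389, "cidrs": ["10.0.0.0/8"]}, # RDP from internal range only
]
print(open_admin_ports(rules))  # [(22, 'SSH')]
```

A real implementation would pull the rules from the provider's API and run on a schedule or on every configuration change.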
#3: Missing or inadequate end-point detection and response (EDR) tools in server and container instances in the cloud
Recently, Gartner® recommended9 a prioritization of security controls in hybrid cloud server workloads. Endpoint detection and response tools are placed at the top of Gartner’s pyramid, and it is essential to implement EDR solutions that cover all applicable controls10 in a cloud environment.
Consider exploring modern tools that can protect physical, virtual, and container-based workloads across hybrid and multi-cloud environments from a single-pane-of-glass management portal. This is a big ask, as most market solutions are still evolving to support this new landscape. However, a great start is the right combination of integrated security and operations toolsets that can tap into the cloud service provider’s native security platform. In addition, take care with license management for agent-based systems deployed in auto-scaling groups of servers or container pods: if the number of agents exceeds the license cap, some systems may run unregistered, outdated agents, potentially exposing a vulnerable point for attacks.
Over time, configuration drift management is key in cloud environments. Considering the dynamic nature of workloads, enterprises must have clear visibility of the desired configuration of their workloads and should be able to track any deviations from the norm. There are startups11 offering solutions to this problem with the help of machine learning and AI-driven automated remediation against any drift.
Most importantly, partnering with a capable managed services provider who can stitch this together is the key to success.
#4: Open or leaky cloud object storage buckets
AWS object storage service S3 is one of the most popular and widely used services for cloud data storage. However, AWS S3 has also gained much notoriety thanks to the recent data breaches12 involving cloud services.
Most S3 leaks occurred because misconfigured storage buckets were left publicly accessible. However, a recent attack13 on a popular bank in the USA went further: the attacker exploited a misconfigured web application firewall to obtain credentials that allowed reading data from S3. The breach affected 100 million bank customers and leaked 140,000 SSNs and nearly 80,000 bank account numbers.
A simple human error of just a few wrong clicks or incorrectly typing a * in the S3 access control configurations could result in making the storage bucket public and leaking millions of records of private data. The financial implications are enormous, and cloud providers14 and security researchers15 are now providing new tools for identifying leaking S3 buckets, enhancing our ability to prevent this situation.
AWS has published a best practice document16 for preventing unsecured S3 buckets. In general, implementing the proactive steps below could prevent the exposure of your S3-stored data:
- Bucket audits: Run periodic audits (scans) to ensure that sensitive S3 buckets use the correct policies and are not marked as public.
- Implement zero trust – least privileged access: Grant access only to the required identities for the specific tasks they intend to perform on the S3 buckets. This could reduce the risk of malicious access.
- Enable bucket versioning: Versioning of your S3 buckets could enable the recovery of accidentally deleted data and can also be helpful in the event of any ransomware attacks.
- Enable and enforce MFA delete: Using MFA (Multi-Factor Authentication) to delete critical buckets is highly recommended. Doing so provides an extra layer of security.
- Enable logging and monitor all S3 policy changes: Logging and tracking every request made to access the bucket will help to flag any anomalies and trigger a remedial response.
- Enable AWS Config to track any drift of policy changes and to enforce compliance.
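The bucket audit in the first bullet can be sketched as a check over the standard IAM policy document format: an `Allow` statement whose `Principal` is `"*"` means anonymous access. This is an illustration only, assuming policy JSON as input; it is not a substitute for tools such as AWS Config or S3 Block Public Access:

```python
import json

def public_statements(policy_json):
    """Return the Sids of policy statements that allow anonymous access.

    Works on the standard IAM policy document format; a Principal of "*"
    (or {"AWS": "*"}) combined with Effect "Allow" means 'everyone'.
    """
    policy = json.loads(policy_json)
    findings = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        if isinstance(principal, dict):
            principal = principal.get("AWS")
        principals = principal if isinstance(principal, list) else [principal]
        if stmt.get("Effect") == "Allow" and "*" in principals:
            findings.append(stmt.get("Sid", "<no Sid>"))
    return findings

policy = """{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "PublicRead", "Effect": "Allow", "Principal": "*",
     "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/*"}
  ]
}"""
print(public_statements(policy))  # ['PublicRead']
```

Run against every bucket on a schedule, any non-empty result on a sensitive bucket should page someone.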
One interesting thing to note: compared to the object storage services of other cloud providers like Azure and Google, the AWS S3 permission system is very powerful and extensive, providing greater flexibility in fine-tuning access control lists. However, it is a double-edged sword. The more complex the system, the greater the chance of misconfiguration, especially when bucket-level controls are combined with higher-level IAM policies. This can increase the attack surface.
#5: Orphaned or ‘Ghost’ instances with default settings
These are quite common in the “shadow IT” side of the cloud where developers spin up cloud servers using cloud consoles. Most of the self-service provisioning is enabled with just a few clicks with default settings. Many pilot environments have been kept running in the cloud with open security vulnerabilities.
New vulnerabilities could emerge that apply to the open-source server OS, or the default network configuration could leave the server wide open to the internet. Either creates a huge risk of potential attacks.
Use the following guardrails:
- Reduce shadow IT by implementing effective mechanisms like Cloud Access Security Broker (CASB)
- Prevent cloud console access by non-authorized users/admins
- Enforce a provisioning workflow using a service request process, or pre-approved service catalogs (Cloud Management Platforms)
- Always use hardened or fortified machine images
- Incorporate all steps for enabling log collection and endpoint protection in the provisioning workflow for newly provisioned cloud servers/containers
- Implement Dev-Sec-Ops pipelines to automate environment provisioning. If your environment uses containers, make sure to scan17 the container images for vulnerabilities.
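A provisioning workflow can enforce several of these guardrails before a request ever reaches the cloud API. The sketch below is a hypothetical policy gate: the image names, agent names, and request fields are all invented for illustration, not drawn from any particular platform:

```python
# Hypothetical allow-list of hardened machine images and mandatory agents.
APPROVED_IMAGES = {"hardened-ubuntu-22.04-v3", "hardened-rhel-9-v1"}
REQUIRED_AGENTS = {"log-shipper", "edr"}

def validate_request(request):
    """Return a list of guardrail violations for a provisioning request."""
    violations = []
    if request.get("image") not in APPROVED_IMAGES:
        violations.append("image is not from the hardened catalog")
    missing = REQUIRED_AGENTS - set(request.get("agents", []))
    if missing:
        violations.append("missing mandatory agents: " + ", ".join(sorted(missing)))
    if request.get("public_ip", False):
        violations.append("public IP requested without approval")
    return violations

# A typical 'shadow IT' request: unhardened image, no EDR agent, public IP.
request = {"image": "ubuntu-22.04", "agents": ["log-shipper"], "public_ip": True}
for v in validate_request(request):
    print("DENY:", v)
```

Wired into a service catalog or Dev-Sec-Ops pipeline, a gate like this turns the guardrails above from policy documents into enforced defaults.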
#6: Crypto-jacking of cloud instances and containers
An open vulnerability in the server OS or in the application packages of a cloud instance could bring crypto-jacking malware onto the server. The attacker can then use the compute resources of the infected server to mine crypto coins18. This could significantly degrade application server performance and negatively impact the user experience. Guardrails include:
- Use modern endpoint protection tools for cloud server workloads
- Use a container vulnerability scanner (e.g.: Clair17) as part of DevSecOps pipelines
- Adopt a zero-trust or least-privileged approach in the entire container life cycle management
- Use CIS benchmark guidelines19 to fortify container images
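As a rough illustration of one signal an endpoint or monitoring tool watches for: a coin miner typically pegs CPU for long stretches. The sketch below assumes one CPU sample per minute, and the 90% / 30-minute thresholds are our own illustrative choices:

```python
def sustained_high_cpu(samples, threshold=90.0, min_minutes=30):
    """Flag a run of CPU samples (one per minute) that stays above
    `threshold` percent for at least `min_minutes` in a row -- a common
    symptom of a coin miner pegging the instance."""
    run = best = 0
    for pct in samples:
        run = run + 1 if pct >= threshold else 0
        best = max(best, run)
    return best >= min_minutes

# A quiet baseline, then 45 minutes pegged near 98%:
samples = [12.0] * 15 + [98.0] * 45
print(sustained_high_cpu(samples))  # True -> investigate the instance
```

Real crypto-jacking detection combines several signals (outbound connections to mining pools, unexpected processes); sustained CPU alone is just the cheapest first-pass alert.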
#7: Absence of log management in the cloud
One of the major concerns of enterprises has been the lack of visibility and control in the cloud environment. In the initial few years of their evolution, cloud providers did not have detailed logging capability for their IaaS and PaaS platforms. However, all of them now provide native solutions for logging anything and everything in the cloud.
In the cloud security context, this is a foundational tenet which we can’t afford to ignore. Implement detailed logging practices and then manage and analyze them using either cloud-native platforms20 or third party solutions21.
The stakes are high because of the software-defined nature of the cloud. Traditional perimeters become blurred in the virtual environment, which is also very dynamic in nature – thanks to containers and microservices-based cloud apps.
One of the more common mistakes in the cloud is enabling all levels of logging while failing to implement the proper analysis tools. We all know that logging comes with a cost that we cannot ignore in a large-scale environment. It is essential to centralize the logs and use best-in-class tools with machine learning capabilities to find anomalies in real-time. At the same time, these log analysis tools must be integrated with your enterprise Security Information and Event Management (SIEM) platforms for triggering incident response.
Recommendations to address log management:
- Use separate, dedicated accounts for implementing centralized log aggregation and analysis22.
- There are practical challenges in orchestrating centralized log analysis across multiple accounts and cloud regions. Be aware of these limitations and pick the right tool.
- Drive a theme of ‘maximum observability’ across all services, platforms, and workloads in the cloud by enabling audit and access logging wherever possible.
- Automate agent installation for collecting and shipping logs for cloud auto-scaling groups and dynamic container pods. As mentioned earlier in this paper, plan for the license management of agent-based third-party solutions.
- Ensure redundancy in log data, with no exceptions.
- Tag or name the logs appropriately while implementing a centralized log collection to enhance management simplicity.
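The tagging recommendation in the last bullet can be sketched as a small normalization step applied before records are shipped to the central aggregator. The field names below are illustrative, not any product's schema:

```python
import json
import time

def normalize(record, account, region, source):
    """Wrap a raw log record with the tags a central aggregator needs
    to answer 'which account/region/service did this come from?'."""
    return json.dumps({
        "ingested_at": int(time.time()),
        "account": account,
        "region": region,
        "source": source,  # e.g. "vpc-flow", "audit", "app"
        "record": record,
    })

line = normalize({"event": "ConsoleLogin", "result": "Failure"},
                 account="prod-123", region="us-east-1", source="audit")
print(line)
```

Applying a wrapper like this at the shipping agent keeps downstream queries simple: every record carries its origin, regardless of which account or region produced it.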
#8: Cloud API Key exposure
Cloud-native development drives the proliferation of cloud API keys in code. Naïve cloud developers may inadvertently push code with embedded API keys to unguarded repositories23, where the keys are likely to become targets of malicious reconnaissance. When using API keys in your applications, make sure they are secure.
Follow these best practices24 to ensure the security of your API keys:
- Never embed API keys directly in application code. Instead, store the keys in environment variables or in files outside the source tree. Cloud platforms offer key management services that can help customers manage the keys.
- Set up application and API key restrictions25. This is a better way to reduce the impact of a compromised API key.
- Rotate API keys periodically and update applications to use the new keys. This is made easier by the key management service (KMS) platforms now available in the cloud.
- Review27 code in detail to ensure that it contains no API keys before publishing it to public repositories like GitHub.
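The first and last practices can be sketched in a few lines: scan source text for strings matching well-known key formats (the AWS `AKIA` access-key prefix is documented; other providers' prefixes can be added), and read the real key from an environment variable. The variable name `MY_SERVICE_API_KEY` is hypothetical:

```python
import os
import re

# AWS access key IDs follow a well-known pattern; extend this list with
# other providers' documented key prefixes as needed.
KEY_PATTERNS = [re.compile(r"AKIA[0-9A-Z]{16}")]

def scan_for_keys(text):
    """Return any strings in `text` that look like embedded cloud API keys."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

def get_api_key():
    """Read the key from the environment instead of the source tree."""
    key = os.environ.get("MY_SERVICE_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError("MY_SERVICE_API_KEY is not set")
    return key

snippet = 'client = connect(key="AKIAABCDEFGHIJKLMNOP")'
print(scan_for_keys(snippet))  # ['AKIAABCDEFGHIJKLMNOP']
```

A scanner like this works well as a pre-commit hook or CI step, rejecting any commit whose diff matches a key pattern before it reaches a public repository.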
Build the cloud security foundation
There are threats emerging every day based upon new tactics and techniques. Although one may argue that cloud security tenets are no different from the foundational cybersecurity triad: confidentiality, integrity, and availability, innovations in the cloud occur with breakneck speed. This makes detecting and protecting your crown jewels in the cloud more difficult unless you keep track of adversaries and their newer tricks.
Also, while being vigilant, it’s important to routinely check your preparedness using periodic fire drills designed to emulate current threats. In tracking newer threats, we see more and more enterprises using the MITRE ATT&CK® framework27.
The MITRE ATT&CK® framework lists close to 200 techniques organized under its tactics. It is recommended to watch for emerging, cloud-focused tactics and techniques and map them to controls for remediation.
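Such a technique-to-control mapping can start as simply as a lookup table. The sketch below uses real ATT&CK technique IDs, but the control mappings are illustrative, drawn loosely from the blind spots in this paper, and far from exhaustive:

```python
# Illustrative mapping of a few cloud-relevant ATT&CK technique IDs to
# the kinds of controls discussed in this paper; extend as new
# techniques emerge.
TECHNIQUE_CONTROLS = {
    "T1078": ["PAM / MFA", "leaked-credential detection"],         # Valid Accounts
    "T1530": ["bucket audits", "least-privileged bucket access"],  # Data from Cloud Storage
    "T1496": ["EDR on instances", "CPU anomaly alerts"],           # Resource Hijacking
}

def controls_for(technique_id):
    """Look up the controls mapped to a technique; flag gaps explicitly."""
    return TECHNIQUE_CONTROLS.get(technique_id, ["<unmapped - review needed>"])

print(controls_for("T1530"))
print(controls_for("T1110"))  # Brute Force: not yet mapped in this sketch
```

The value of even a trivial table like this is that unmapped techniques surface explicitly, turning coverage gaps into a reviewable backlog rather than silent blind spots.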
The Center for Internet Security (CIS) has defined specific controls28 for managing cloud security, with applicable controls defined for each cloud service model.
- As hybrid and multi-cloud implementations increase, a consistent security and governance framework is an absolute necessity. If enterprises overlook this or address it simply as an afterthought, large and complex cloud environments could pose unique security challenges.
- Traditional security protection approaches like perimeter fortification, signature-based detection, etc. are not enough to secure the cloud. The software-defined nature of the cloud and modern innovative services, like container and serverless systems, complicate protection strategies.
- It is absolutely essential to be aware of the major security blind spots on your journey to the cloud. Once you are covered against the most prevailing threats, you can build a comprehensive security and governance framework which can future-proof your digital ambitions.
- There are numerous start-ups and service providers trying to pitch their tools and solutions which confuse and overwhelm customers.
- Whatever may be the key drivers or triggers for cloud adoption, security should get a front-row seat in your journey to the cloud.
- Security ‘activism’ may initially slow down the agility and velocity gains in the cloud. However, it’s worthwhile to be ruthless in your diligence. This may save the enterprise from ultimately retreating from the cloud. A few enterprises have started their journey back from the cloud because of security and cost challenges.
- Pick a Cloud-Sec-Ops partner who can implement cloud security architecture and incident response capabilities with the right tools which are well integrated with the hybrid-multi-cloud environment and who can automate security operations and policy enforcements.
- Finally, the cloud is not an answer to every business need. Don’t take everything to the cloud. If enterprises carry their junk (non-volatile, less critical, fragile, vulnerable legacy apps) to the cloud on their first trip, it may jeopardize the success of their crown jewel applications in the cloud.
Is your Sec-Ops team ready for this?
Perhaps your team is ready, but the odds are against them. They may be deeply invested in existing legacy tools that are not built for the cloud. In addition, gaps in the skills necessary for architecting a fortified cloud foundation may slow down your cloud onboarding process along with the realization of cloud business benefits.
RCG can be your extended team that equips you to tackle the transition to the cloud. RCG’s RMinder Cloud Custodian Service delivers a comprehensive, SLA-based, managed Cloud-Sec-Ops for your piece of the cloud, assuring that you enjoy the business benefits of the cloud while avoiding potential pitfalls.