
The Akamai Blog

TOP 10 BEST PRACTICES FOR SECURING CLOUD WORKFLOWS

Cloud migration is a double-edged sword. While it promises greater scalability, agility, and even new approaches to building applications, it also introduces complexity, and complexity means more security risk. The vulnerabilities exposed in breach after breach have highlighted the risk that comes with the cloud.

Fortunately, there are some best practices that can be applied to securing cloud workflows. And while every cloud provider is unique, they all share similar characteristics. Let's take a look at the leading provider, Amazon Web Services (AWS), to examine some of the security challenges of public cloud deployments, as well as 10 proven best practices to mitigate them.

      1. RTFM. Every cloud provider publishes recommendations for security infrastructure design and application configuration. These typically cover topics such as identifying, categorizing, and protecting your assets; managing access to resources; and creating users and groups, as well as ways to secure your data, operating systems, applications, and overall infrastructure in the cloud. For AWS, you can download AWS Security Best Practices as a primer, manage security checks and alerts in AWS Security Hub, and follow updates by subscribing to the AWS Security Blog.

      2. Understand the shared responsibility model. AWS and other providers make it clear that security in the cloud is a shared responsibility. They are responsible for ensuring that their platforms are always on, available, up to date, and so on. You are responsible for protecting your applications and data. In most data breaches, the fault has lain with the customer. Understand where the provider's responsibility ends and yours begins.

      3. Embrace the chaos. Both internal errors, such as a misconfiguration, and external events, such as a DDoS attack, will happen. Identify weak points before they manifest to minimize damage. Increase your readiness by creating a security incident response runbook and establishing governance, risk, and compliance models. To do this, define roles and responsibilities, response mechanisms, and service level agreement (SLA) standards, and make sure partners are accountable for their parts.

      4. Evaluate your options. Native public cloud security, DIY security, and point solutions all have their merits and flaws. Understand what makes sense to be secured with a cloud provider's solutions, and what requires a platform-agnostic approach to ensure consistent controls across a hybrid or multi-cloud architecture.

        In addition, understand your limitations in staffing and expertise, and supplement where necessary. Consider which responsibilities are best offloaded, and look for a vendor with relevant experience and customers. The right security partner can help you reduce risk and be more agile; make sure professional services and managed security services, such as a managed SOC, are options for future needs or special projects.

      5. Be a business enabler. Security should enable the business, not stifle innovation. Public cloud allows DevOps to shift left, enabling teams to spin up infrastructure more quickly, accelerate testing, and leverage new technologies such as containers and microservices for continuous testing and deployment. This requires incorporating security earlier in the development cycle. However, a secure software development life cycle (SDLC) implementation doesn't provide full vulnerability coverage. Deploying a WAF as the primary web application runtime protection provides a safety net.

      6. Accept that temporary is the new permanent. Shortcuts in development don't always get fixed before pushing to production. For example, new AWS S3 buckets block public access by default. You can modify these settings the right way by using bucket policies and object-level permissions, or take the shortcut of enabling public access in a few clicks on the assumption that the virtual private cloud (VPC) will protect you, which leads to new vulnerabilities later on. Bad temporary practices either remain permanent or become the new standard.
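        One way to catch that shortcut before it ships is to lint bucket policies for wildcard principals as part of a deployment check. The sketch below is illustrative, not a complete policy analyzer; it uses plain Python (no AWS SDK), and the bucket name and policy are hypothetical:

```python
import json

def policy_allows_public_access(policy_json: str) -> bool:
    """Return True if any Allow statement grants access to everyone ('*')."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        is_wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        # A wildcard principal with no Condition block is effectively public.
        if is_wildcard and not stmt.get("Condition"):
            return True
    return False

# Hypothetical policy granting anonymous read access to a bucket.
public_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})

print(policy_allows_public_access(public_policy))  # True
```

        A real check would also need to consider Condition keys (for example, restricting by `aws:SourceVpce`) and the account-level Block Public Access settings, but even a coarse lint like this stops the most common "temporary" shortcut from reaching production unnoticed.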

      7. Trust no one. Implementing a Zero Trust approach means that all requests, both internal and external, should be verified. The goal is to ensure all the security monitoring and services are set up correctly and any violation is reported immediately to the right person.

        When it comes to AWS, the following services should be used religiously:
        • CloudTrail and CloudWatch, which cover infrastructure calls as well as user and API call activity in your account.
        • GuardDuty, which covers host communications and threat intelligence as well as network communications through VPC Flow Logs monitoring.
        • Amazon Inspector, which covers host vulnerabilities and CIS benchmarks.
        • AWS Config and Config Rules for asset inventory and configuration compliance.
        • AWS Security Hub, or another solution like Splunk, for aggregating third-party monitoring.

        Even after you've put the appropriate controls in place, things like server-side request forgery (SSRF) can occur if your processes do not include a way to validate containers or code that your teams download from public repositories. In the case of SSRF, a robust protection method is to whitelist the DNS name or IP address that your application needs to access. If you must rely on a blacklist, be sure to validate user input properly.
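        The whitelist approach above can be enforced at the application layer before any outbound request is made. The following is a minimal sketch; the allowed hostnames are hypothetical, and a production guard would also need to resolve DNS and re-check the resolved address to defeat DNS rebinding:

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical whitelist of hosts this application legitimately needs.
ALLOWED_HOSTS = {"api.partner.example.com", "cdn.partner.example.com"}

def is_allowed_target(url: str) -> bool:
    """SSRF guard: permit outbound requests only to whitelisted hosts."""
    host = urlparse(url).hostname
    if host is None:
        return False
    # Reject IP literals that are loopback, link-local (cloud metadata
    # endpoints such as 169.254.169.254), or private ranges outright.
    try:
        addr = ipaddress.ip_address(host)
        if addr.is_loopback or addr.is_link_local or addr.is_private:
            return False
    except ValueError:
        pass  # Not an IP literal; fall through to the hostname check.
    return host in ALLOWED_HOSTS

print(is_allowed_target("https://api.partner.example.com/v1/data"))   # True
print(is_allowed_target("http://169.254.169.254/latest/meta-data/"))  # False
```

        Note that the metadata endpoint check matters in AWS specifically: the Capital One breach was an SSRF attack against exactly that address.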

      8. Guard the front door. A highly effective security measure is to control and monitor inbound and outbound traffic to distinguish between legitimate and illegitimate requests.

        A WAF can inspect inbound traffic for threats that could damage your site functionality or compromise data. However, a common blind spot is API traffic: organizations simply do not have visibility into what has been exposed, to whom, and what is happening with that data. Some WAFs can protect API traffic as well, while an API gateway provides a unified entry point for all API consumers.

        If internal servers are compromised, they can pose a threat to a larger network of resources, especially when attackers attempt to steal sensitive data or communicate with command and control systems. Filtering outbound traffic against an expected list of domain names is an efficient way to secure egress from a VPC: the hostnames of the services an application depends on are typically known at deployment, the list is small, and it rarely changes.
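        Because the expected egress list is small and known at deployment, the filtering logic itself is simple. The sketch below classifies observed outbound hostnames against such a list; the destinations shown are hypothetical, and in practice this logic would live in an egress proxy or DNS firewall rather than application code:

```python
# Hypothetical egress destinations known at deployment time.
# A leading '*.' permits any subdomain of that suffix.
EXPECTED_EGRESS = {"db.internal.example.com", "*.amazonaws.com", "api.stripe.com"}

def egress_permitted(hostname: str) -> bool:
    """Check an outbound hostname against the expected egress list."""
    hostname = hostname.lower().rstrip(".")
    for pattern in EXPECTED_EGRESS:
        if pattern.startswith("*."):
            # '*.amazonaws.com' matches subdomains, not the bare apex.
            if hostname.endswith(pattern[1:]):
                return True
        elif hostname == pattern:
            return True
    return False

observed = ["s3.us-east-1.amazonaws.com", "api.stripe.com",
            "exfil.attacker.example"]
for host in observed:
    print(host, "->", "allow" if egress_permitted(host) else "block")
```

        Anything outside the expected list, such as an unfamiliar domain contacted by a compromised host, is blocked and can be surfaced as an alert, turning attempted data exfiltration or command-and-control traffic into a detection signal.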

      9. Plan for tomorrow. Whatever you do, make sure that you build in flexibility to address future requirements. Many organizations attempt to go cloud-first, only to realize that some workloads are best run on-premises. Many others realize that the best architecture for their businesses involves multiple cloud providers. While it may not be on your radar today, using capabilities that are also available on other platforms will future-proof your environment, especially in security and container management.

      10. Test your security posture. Periodic exercises that test your organization's security preparedness are necessary. Exercises should check for vulnerabilities in your IT systems and business processes, and recommend steps to lower the risk of future attacks. You can conduct security assessments internally or partner with a third party. For more information on how to get started, read Gartner's Actions for Internal Audit on Cybersecurity, Data Risks or AWS Security Audit Guidelines.

Use public cloud wisely

Security breaches can be extremely costly, and deploying security tools alone is not enough to stop them. Protecting your applications in the cloud requires a clear understanding of public cloud service models and known security issues. While this can require a lot of specialized knowledge, incorporating well-defined best practices can go a long way.
