Let's continue our analysis of the ideal WAF requirements. Scale is, without a doubt, one of the most important requirements of an effective WAF, and it has to be considered from two perspectives: under standard traffic conditions and under unusually high levels of traffic. Let's look at each one.
First, consider what happens to a WAF architecture under standard traffic conditions. Web application attacks don't necessarily generate a high level of traffic, so in many cases there is no volumetric hint suggesting that an attack is taking place. Consequently, a WAF has to be permanently active, and this can cause problems, as we will see now. As I explained in a previous article, WAFs inspect traffic packets and compare them against defined models to determine the likelihood that a request, or sequence of requests, belongs to a malicious activity rather than to a legitimate user. Processing and comparing each request has a computational cost, and in most commercial WAFs this work takes time to execute. In other words, the overall website or application performance is negatively affected and, in general, the end-user experience is harmed precisely because the WAF is doing its job. More often than we would like, we hear comments such as: "Yes, we do have a WAF, but we don't use it because it slows down our site" or "when traffic increases, the WAF introduces more delay, so we have to disable it to keep our application serving users". These comments, although they may sound funny, are sadly true and are backed up by research conducted by the Ponemon Institute on WAF usage strategy. The survey, conducted in early 2015, interviewed around 600 IT professionals responsible for web application security about the status of their WAF deployment:
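To make that per-request cost concrete, here is a minimal, purely illustrative sketch of signature-based inspection. The two regex signatures and the `inspect` helper are assumptions invented for this example; a production WAF evaluates hundreds of far more sophisticated rules against every single request, which is exactly where the latency comes from.

```python
import re

# Two deliberately crude, illustrative signatures -- a real WAF ships
# hundreds of far more sophisticated rules (and anomaly models on top).
SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # naive SQL-injection pattern
    re.compile(r"(?i)<script\b"),              # naive cross-site-scripting pattern
]

def inspect(request_body: str) -> bool:
    """Return True if any signature matches, i.e. the request looks malicious."""
    return any(sig.search(request_body) for sig in SIGNATURES)

# Every request pays this matching cost *before* the application sees it.
print(inspect("id=1 UNION SELECT password FROM users"))  # True
print(inspect("name=alice"))                             # False
```

Multiply this matching work by hundreds of rules and thousands of requests per second, and the performance impact discussed above becomes apparent.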
Let's put aside the 30% of companies that accept the high risk of not using a WAF and the 2% that don't even know what they have (if you know anyone in the '2% group', please urge him or her to read this article). The data shows that only 20% of companies use a WAF on a consistent basis. That leaves ...48%!, let me repeat that with a bigger font just in case it wasn't enough, ...48%! of companies that either don't use their WAF or use it only partially. The two most plausible reasons for this 48% are the performance degradation WAFs introduce and the difficulty of managing a WAF solution. We will talk about management in the next blog entry; for now, let's focus on WAF performance.
The first thing we have to figure out is how IT professionals prioritize security and performance. Again, the Ponemon Institute has the answer:
A quick look at the chart shows that performance and security are equally important - even among security professionals! Both aspects are a priority, and ideally you shouldn't have to trade one off against the other. But this 'ideally' doesn't seem easy to achieve. Here is why.
Traditional hardware-based WAFs, or even distributed WAFs with limited scale, introduce delay when processing requests, mostly due to architectural limitations. Each hardware device has an inherent processing capacity limit, which becomes more noticeable as the equipment approaches its maximum bandwidth. This problem can be partially worked around by over-provisioning the infrastructure, guaranteeing that the devices operate far below the CPU thresholds that limit their capacity. However, this strategy carries an additional inefficiency cost and will struggle when traffic grows. The capacity to scale becomes crucial when deciding on the best WAF solution. Here, again, the cloud-based WAF approach offers unique value. Each of the entities where the WAF intelligence is deployed and executed (potentially, each of the 200,000+ servers that comprise the Akamai Intelligent Platform) works in a controlled mode so it never exceeds the threshold that would put performance at risk. The work is massively balanced across the whole platform, ensuring that the individual performance of each server is not affected, and neither is the overall performance. Additionally, there is an inherent performance improvement that every Akamai product, in every flavor, provides: the whole Akamai portfolio is comprised of products that have in common, among other important things, that they improve the scale and performance of applications and websites. The outcome, as opposed to other commercial WAFs, is a distributed WAF that not only has excellent accuracy but also improves the application's or website's performance.
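The balancing idea can be sketched in a few lines. This is a minimal illustration of threshold-aware dispatch, not Akamai's actual algorithm: the node names, the load figures, and the 70% safety threshold are all assumptions made up for the example.

```python
# Hedged sketch: send each request to a node whose current utilization is
# safely below a threshold, so no single node approaches the CPU limit
# that would degrade inspection latency. All values here are illustrative.

THRESHOLD = 0.70  # assumed safety ceiling; not a real platform parameter

def pick_server(loads: dict[str, float]) -> str:
    """Choose the least-loaded node that is still under THRESHOLD."""
    eligible = {name: load for name, load in loads.items() if load < THRESHOLD}
    if not eligible:
        raise RuntimeError("all nodes saturated -- scale the platform out")
    return min(eligible, key=eligible.get)

loads = {"edge-1": 0.65, "edge-2": 0.40, "edge-3": 0.72}
print(pick_server(loads))  # edge-2: lowest load under the threshold
```

The point of the sketch is the failure mode: with only a handful of appliances, the `eligible` set empties quickly under load, whereas a platform with hundreds of thousands of nodes can keep every individual node comfortably below its threshold.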
The second perspective we introduced at the beginning of this article is what happens to WAF performance under high traffic conditions.
This question is, sadly, often overlooked. Current attack techniques are quite sophisticated. Among these complex patterns, one trend that is becoming more frequent over time is launching a denial-of-service attack against the infrastructure in combination with web application attacks (SQL injection, cross-site scripting, etc.). Since the impact of a volumetric attack is immediate, the victim notices the consequences right after the attack is launched and will very likely focus all attention, effort and resources on defeating it. This is precisely the moment the attacker typically exploits to launch an unnoticed, disguised web application attack to steal information or deface websites.
These combined DDoS + web application attacks are capable of compromising the WAF's effectiveness and making the situation described in the previous point worse. If the traditional WAF architecture tends to degrade application performance under standard traffic, then under a high-traffic regime the degradation will not only be worse: in some cases it may overload the WAF, leaving it unavailable for the duration of the attack. Some hardware-based WAF solutions face this problem with what I would call the 'head-in-the-sand ostrich strategy', i.e., proactively disabling the WAF to prevent stress conditions from overwhelming the hardware. This approach is as dangerous as the one proposed by an IT manager I met a few years ago: "but, how do you want me to keep my WAF active if I am experiencing 5 times my regular traffic?"
Adequate scale, which is absolutely relevant under standard traffic conditions, becomes in this high-traffic context the only possible solution. Any security provider that can't offer protection against a combined denial-of-service and web application attack, in other words, any provider that doesn't have both DDoS protection and a WAF, must be considered incomplete.
For further information, I recommend reading the white paper "Improving Web Application Security: The Akamai approach to WAF".