
The Akamai Blog

What you need to know about BEAST

Earlier this month, our friends at the Trustworthy Internet Movement launched a new Secure Sockets Layer (SSL) dashboard called "SSL Pulse".  This tool is an excellent resource that provides one set of industry benchmarks for both Akamai and our customers. One of the first things we do when a tool like this is released is try it out on our own web sites (that is, web properties owned by Akamai, such as www.akamai.com) to see how we're rated in comparison to the average site.

Something interesting we see from SSL Pulse is that, based on their research, up to 75% of the world's SSL-enabled web servers are vulnerable to BEAST.  BEAST is a tool that exploits a vulnerability in the widely used version 1.0 of the TLS (Transport Layer Security) protocol.  TLS is the successor to SSL, and while TLS v1.0 has had this vulnerability for years, it was only last year that a tool to exploit it was publicized.

One reason for this lag in "exploitability" could be that the logistics of getting the BEAST tool to work are fairly complicated. In order to use BEAST to eavesdrop on traffic, the attacker needs to be able to read and inject packets on the victim's network, and do so very quickly.  While not impossible, this limits the practical targets for a BEAST attack to situations such as web browsers on a local network. For example, it can be effective at grabbing random traffic in a coffee shop or, on a larger scale, in a country that forces all Internet traffic through a limited number of network gateways.

Although these hurdles may seem to diminish BEAST's danger, SSL Pulse is telling us that up to 75% of SSL-enabled web sites are vulnerable. That's a pretty high exposure level, so the next logical question is: why?

In the operating system and application world, we as an industry have patch and vulnerability management down to a fairly routine process: tools for updating are available from the vendor's website or a local patch server, both at regular intervals and as risks are identified.  In the case of BEAST, there are server configurations that can limit its effect, which security expert Ivan Ristic explains well on the Qualys blog.

But unlike a software vulnerability, BEAST exploits a protocol vulnerability, which is a different story altogether. When the vulnerability is in the SSL/TLS protocol itself (as we've seen with SSL renegotiation and BEAST), a chain of dependencies emerges: SSL libraries, web server compatibility, browser capabilities, and their respective upgrade paths.  The real fix takes place on the server side and requires an upgrade to the SSL library plus a configuration change to support only TLS v1.1 and v1.2. This is where software support and compatibility issues begin to arise.
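As an illustration only (the Python `ssl` API shown here postdates this article and assumes a modern OpenSSL-backed Python build), the shape of that server-side fix looks like:

```python
import ssl

# Build a server-side TLS context that refuses the vulnerable
# TLS v1.0 (and older) handshakes entirely -- the "real fix"
# described above, once library and client support allow it.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # v1.1 would match the article's minimum

print(ctx.minimum_version)
```

A client that can only speak TLS v1.0 will then fail the handshake outright rather than silently falling back to the vulnerable protocol version.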

While OpenSSL, which Akamai and a substantial percentage of secure Web servers use, supports these more recent TLS versions as of v1.0.0 of the library, that version is not widely deployed. The lag is due primarily to the newer release's relative immaturity and to how critical the SSL library is to a Web site: if the SSL library fails, it takes the entire site down with it.  OpenSSL v0.9.8w is the current version in broad use, and it only supports TLS v1.0.  Akamai is in the process of integrating OpenSSL v1.0.1 into our codebase and we expect to deploy it later this year, but until then we will support TLS v1.0.
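For operators wondering where their own stack stands, a quick check is possible today (hedged: the `HAS_TLSv1_*` flags below are a Python 3.7+ convenience that did not exist when this article was written):

```python
import ssl

# Report the OpenSSL build Python is linked against, then which TLS
# protocol versions that build can negotiate.
print(ssl.OPENSSL_VERSION)

for flag in ("HAS_TLSv1", "HAS_TLSv1_1", "HAS_TLSv1_2", "HAS_TLSv1_3"):
    print(flag, "=", getattr(ssl, flag))
```

An OpenSSL 0.9.8-era build would report no TLS v1.1/v1.2 support, which is exactly the deployment gap described above.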

And then there is the issue of browser support.  Three popular browsers don't yet support TLS v1.1 or v1.2: Google Chrome, Mozilla Firefox, and Apple's Safari.  Microsoft Internet Explorer on Windows 7 and later, as well as Opera, supports TLS v1.1 and v1.2 and is considered "safe" from BEAST.  Since we as an industry are stuck supporting older browsers for the near term, the interim fix is to prefer the RC4-128 cipher for TLS v1.0 and SSL v3; Ivan Ristic details the configuration for this approach on the Qualys blog.  RC4-128 is faster and cheaper in processor time than the alternatives, so many large web sites use it as their default algorithm.  In 2011, Akamai started reporting SSL connection statistics in our quarterly State of the Internet Report. As of Q4 2011, approximately 15% of SSL/TLS negotiations on the Akamai platform used RC4, and all of the browsers we observed negotiated RC4-128 or stronger, which implies that most browsers can support the RC4 fix for BEAST.

Unfortunately, not all organizations can use RC4, eliminating one relatively easy fix. For example, FIPS-140-2 is the US Government standard for certification of cryptographic modules, and it only allows 3DES, AES-128, and AES-256 (along with SHA-1 and SHA-2).  The certified OpenSSL FIPS-140 module doesn't even include RC4 as an available cipher.  FIPS-140-2 applies to all US Government configurations, to HIPAA-regulated entities (via guidelines and standards from the Department of Health and Human Services Office for Civil Rights), and to most Financial Services institutions through their security policies.  Inside the US Government, this applies to both web servers and browsers, so there is a corner case where a web server that only supports TLS v1.0 with RC4 will fail to negotiate with a Government user that only supports v1.0 and FIPS ciphers.

As far as compliance with the Payment Card Industry Data Security Standard (PCI-DSS), while there isn't a definitive list of acceptable protocols and algorithms from the PCI Council, their scanning guidelines do say that SSL v3 or higher should be used, a point which is fairly moot for our purposes.  In a recent poll of PCI Qualified Security Assessors (QSAs), most of them considered the minimum configuration for cipher strength to be AES-128 or higher, with some saying that RC4-128 is acceptable.  Akamai has a PCI-DSS configuration tool to help our customers build secure and compliant Akamai service configurations for their site, and the tool checks that the customer's site is configured to use only AES-128, AES-256, or 3DES with SHA-1 or SHA-2.

And finally, the use of RC4 for some applications may actually introduce unexpected vulnerabilities. For large Web objects that require longer SSL/TLS sessions, reuse of the session key increases the amount of traffic encrypted under a single key, making the session potentially vulnerable to other attacks against SSL traffic.

For most dynamic sites, such as eCommerce sites with shorter sessions and less data per session, this is not a problem.  However, it can impact customers using Akamai for large downloads (movies, operating system patches, CD and DVD images, etc.) and for long video streams (both on-demand and live streaming). To be clear, this is a cipher vulnerability, and the same impacts would be seen for these use cases with any origin or service provider using RC4.

So, how do we deal with BEAST? At this point, we appear to have a dilemma, but web site owners do have several options. They can:

    •    Require RC4-128 for TLS v1.0 and reject any other ciphers.  This will ungracefully reject browsers that can't use RC4.

    •    Use RC4-128 for TLS v1.0 but allow browsers to choose 3DES and AES. This will leave some risk that BEAST will be effective on some traffic where the browser selects 3DES or AES.

    •    Accept the risk of BEAST being used to intercept a limited amount of traffic, and use stronger cryptographic algorithms that may be vulnerable to this particular protocol flaw but are more acceptable for compliance, validation, and certification purposes.
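The tradeoffs among these three options can be sketched in miniature. This is a toy model of cipher negotiation, not real TLS code; the cipher names are illustrative OpenSSL-style names and the browser profiles are hypothetical:

```python
# The three options above expressed as server cipher-preference policies.
POLICIES = {
    "rc4_only":    ["RC4-SHA"],                                   # option 1
    "rc4_first":   ["RC4-SHA", "DES-CBC3-SHA", "AES128-SHA"],     # option 2
    "strong_only": ["AES128-SHA", "AES256-SHA", "DES-CBC3-SHA"],  # option 3
}

def handshake(server_prefs, client_offer):
    """Server picks its most-preferred cipher that the client also offers."""
    for cipher in server_prefs:
        if cipher in client_offer:
            return cipher
    return None  # no common cipher: the handshake fails

fips_browser = {"AES128-SHA", "AES256-SHA", "DES-CBC3-SHA"}  # no RC4 allowed
typical_browser = fips_browser | {"RC4-SHA"}

print(handshake(POLICIES["rc4_only"], fips_browser))      # None: client rejected
print(handshake(POLICIES["rc4_first"], typical_browser))  # RC4-SHA
print(handshake(POLICIES["strong_only"], typical_browser))  # AES128-SHA: BEAST risk on TLS v1.0
```

The first call shows the FIPS corner case described earlier; the second and third show why options 2 and 3 trade BEAST exposure against compatibility and compliance.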

Each of these options has a tradeoff with respect to the different vulnerabilities that the site and its users are exposed to, the number of Web site users that will be rejected by the site, and how compliant the web site will be with relevant regulatory frameworks.  Akamai's approach is to support all three of these options, and to work closely with our customers to help them understand the benefits and risks associated with each. Then, our customers can make an educated choice based on the priorities for their web sites and set their service configuration options accordingly.

Mike Smith is a security evangelist at Akamai Technologies.


Hi Michael,

First, just wanted to say this is an excellent article and it's great seeing companies like Akamai paying some attention to BEAST and other SSL/TLS vulnerabilities. Articles like this will hopefully help to encourage browser companies to finally upgrade to TLS1.2 and enable it by default (many have it available, but not enabled).

A couple of points I wanted to discuss:

>> Microsoft Internet Explorer on Windows 7 and later and Opera both support TLS v1.1 and v1.2 and are considered "safe" from BEAST.

These browsers are "safe" regarding BEAST, if and only if TLSv1.1+ is enabled (not always the case), and if the server they are connecting to supports it (many still don't - see SSLPulse).

>> Use RC4-128 for TLS v1.0 but allow browsers to choose 3DES and AES. This will leave some risk that BEAST will be effective on some traffic where the browser selects 3DES or AES.

I'd probably say "Prefer" RC4-128 here instead of "Use". This can be arranged with SSLHonorCipherOrder in Apache as mentioned in Ivan's blog and there are similar flags in other webservers.

I have yet to see a browser that doesn't support RC4-128, so the browser will send it as a supported cipher and end up using it, since the server will select it if it's first in the server's list of acceptable ciphers for the negotiated protocol.

It's good practice to list TLS1.2-only ciphers first so that if the connection is TLS1.2, it can use a better cipher, but for SSLv3/TLS1.0 it can fall back to RC4.

The client sends a list of ciphers it supports, and the server will say "use this cipher", so the only reason the client would use 3DES or AES on TLS1.0/SSLv3 is if it doesn't support RC4-128 and therefore doesn't send it in the list of supported ciphers.

That said, there may be some clients out there that don't support RC4-128, so if anyone knows of any, please post them here in a comment (would be great to know about!).

Hope that helps!

Best Regards,

All the major browsers now use 1/n-1 record splitting to solve this issue. CBC-encrypted data is split into a first TLS record carrying a single byte of payload and a second record with the rest of the plaintext. The unpredictable data from the first record is sufficient to make a BEAST attack impossible.
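The splitting idea itself is simple. As a hypothetical sketch (real implementations do this inside the TLS record layer, before CBC encryption):

```python
def split_1n1(plaintext: bytes) -> list:
    """1/n-1 record splitting: emit the first byte as its own record.

    The ciphertext of that one-byte record serves as an unpredictable
    IV for the record carrying the remaining n-1 bytes, removing the
    predictable-IV property that BEAST depends on."""
    if len(plaintext) <= 1:
        return [plaintext]
    return [plaintext[:1], plaintext[1:]]

print(split_1n1(b"GET / HTTP/1.1"))  # [b'G', b'ET / HTTP/1.1']
```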

So sites can relax.

(Not every TLS client is a browser, of course, but BEAST requires a lot of control of the TLS client by the attacker and I'm not aware of any non-browser situations where that's possible.)

Hi Adam,

>> So sites can relax.

I agree - in the case where our friends and parents all keep their browsers and systems up to date with latest patches like we do ;-)

For those that don't though, or entire companies stuck with 500+ client boxes using older browsers because of upgrade path costs, I think it's the server admin's responsibility to mitigate any vulnerabilities where we can.

Best Regards,

The latest versions of the major browsers do indeed mitigate BEAST, however there are still many users out there using older browser versions. We need to quantify that somehow in order to have a better understanding of the size of the problem. I have an Apache module for passive handshake analysis -- perhaps I could extend it to detect presence of BEAST mitigation measures.