
August 2013 Archives

SEA Attacks Illustrate Need for Better DNS Security

The Syrian Electronic Army (SEA) -- a pro-Assad hacking group -- is making misery for some of the biggest entities on the Internet.

The SEA's activities have attracted plenty of media attention this week. Users couldn't access many high-profile websites Tuesday after SEA launched a targeted phishing attack against a reseller for Melbourne IT, an Australian domain registrar and IT services company. According to the IDG News Service, the attack allowed hackers to change the DNS records for several domain names including nytimes.com, sharethis.com, huffingtonpost.co.uk, twitter.co.uk and twimg.com -- a domain owned by Twitter.

"This resulted in traffic to those websites being temporarily redirected to a server under the attackers' control," the news service reported. "Hackers also made changes to the registration information for some of the targeted domains, including Twitter.com. However, Twitter.com itself was not impacted by the DNS hijacking attack."

Akamai InfoSec's CSIRT team has been monitoring the attacks. From our perspective, recent events illustrate the need for better DNS security and better awareness of spear phishing, a favorite tactic of the SEA.

Michael Kun, a security response engineer on Akamai InfoSec's CSIRT team, told me companies should be getting more serious about registry locks so the bad guys can't tamper with their DNS records.

Domain owners can and should ask their registrars to put the registry locks in place -- something Melbourne IT did for nytimes.com and the other sites. The lock is deployed at the registry level -- with companies that administer such domain extensions as .net, .org and .com.
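For readers who want to check whether a domain already has a registry lock, the EPP status codes in its whois record are one indicator. Below is a minimal sketch, assuming a local `whois` command-line client is available; whois output formats vary by registry, so treat this as a rough heuristic rather than an authoritative check.

```python
import subprocess

# EPP status codes typically associated with registry-level locks.
# ("server*" statuses are set at the registry; "client*" statuses at the registrar.)
REGISTRY_LOCK_STATUSES = {
    "serverTransferProhibited",
    "serverUpdateProhibited",
    "serverDeleteProhibited",
}

def registry_lock_statuses(domain):
    """Return the registry-lock EPP statuses found in a domain's whois output."""
    result = subprocess.run(["whois", domain], capture_output=True, text=True, timeout=30)
    return {status for status in REGISTRY_LOCK_STATUSES if status in result.stdout}

if __name__ == "__main__":
    for domain in ("nytimes.com", "example.com"):
        found = registry_lock_statuses(domain)
        print(domain, "->", ", ".join(sorted(found)) if found else "no registry lock statuses found")
```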

Kun said companies should also seek out registrars that require two-factor authentication and pressure other registrars to support two-factor authentication as well.

"Unfortunately, the problem is really with the registrars, so there's not much that customers can do directly except to vote with their dollars," Kun said.





Security Ethics and The Hacker Academy

Outside the security community, the word "hacker" is often misunderstood. A hacker is seen as someone who operates outside the law, a troublemaker who is only in the business of engineering attacks and causing chaos. Because of that misconception, I often feel the need to educate the masses.

To that end, I'd like to direct you to the blog of security company Tripwire, which has a talented team we often collaborate with. Its latest post is about what's known as The Hacker Academy, a three-year-old online ethical-hacking program designed by penetration testers to give young security practitioners hands-on training.

My good friend Anthony Freed spoke with Joseph Sokoly (@jsokoly), a vulnerability engineer at MAD Security who I've written about in this blog before.

Sokoly is a rising star in the security community, and does a lot of work with The Hacker Academy. The blog post includes a video interview where he discusses the San Francisco-based academy, whose online resources are available around the clock.

Please check it out.


Akamai FedRAMP Compliance is Huge for Security

Yesterday was a big day around here. We achieved Federal Risk and Authorization Management Program (FedRAMP) compliance as a cloud services provider. 

Big deal, you say? Why, yes. It is. 

FedRAMP is a U.S. government-wide program that standardizes the approach to security assessment, authorization, and continuous monitoring for cloud products and services. Specifically, Akamai's globally distributed, publicly shared cloud services platform has received "Provisional Authority to Operate (P-ATO)" from the FedRAMP Joint Authorization Board (JAB). 

As Akamai Public Sector VP Tom Ruff noted, "Achieving FedRAMP compliance allows public sector organizations to trust the Akamai Intelligent Platform as the foundation for their cloud computing projects, while at the same time supporting their defense-in-depth strategies. As important, FedRAMP compliance is another example of Akamai's commitment to serving the public sector and complements our DNSSEC, IPv6 and HIPAA compliant offerings, currently supporting nearly all Cabinet-level agencies."

Akamai CSO Andy Ellis said on Twitter: "The FedRAMP accreditation for @Akamai covers pretty much our entire commercial service portfolio."

The U.S. General Services Administration lists the following goals and benefits of FedRAMP on its website:

Goals:
--Accelerate the adoption of secure cloud solutions through reuse of assessments and authorizations
--Increase confidence in security of cloud solutions
--Achieve consistent security authorizations using a baseline set of agreed upon standards to be used for Cloud product approval in or outside of FedRAMP
--Ensure consistent application of existing security practices
--Increase confidence in security assessments
--Increase automation and near real-time data for continuous monitoring

Benefits:
--Increases re-use of existing security assessments across agencies
--Saves significant cost, time and resources - "do once, use many times"
--Improves real-time security visibility
--Provides a uniform approach to risk-based management
--Enhances transparency between government and cloud service providers (CSPs)
--Improves the trustworthiness, reliability, consistency, and quality of the Federal security authorization process

The Akamai InfoSec compliance and public sector staffs worked long and hard to reach this moment. For me, it's one of many examples of how dedicated people here are to making Akamai products and services secure. They were tireless and tenacious in reaching this point, and I'm honored to share the same workspace with them.



DDoS Attacks: China's Weekend of Irony

I can't help but see irony in all the news reports this morning about China suffering one of the worst DDoS attacks it has ever seen. China is usually seen as the place where attacks begin, a perception bolstered by findings in Akamai's most recent "State of The Internet" report.

Of all the reports on the weekend DDoS against China, this passage from The Wall Street Journal's article explains things best, in my opinion:

The attack, which was aimed at the registry that allows users to access sites with the extension ".cn," likely shut down the registry for about two to four hours, according to CloudFlare, a company that provides Web performance and security services for more than a million websites. Though the registry was down, many service providers store a record of parts of the registry for a set period of time, meaning that the outage only affected a portion of websites for some users.

The article quotes CloudFlare Chief Executive Matthew Prince, who said the company observed a 32-percent drop in traffic for the thousands of Chinese domains on the company's network during the attack compared with the same time 24 hours earlier. The article also notes that while China is among the best there is at carrying out attacks, it's in a much weaker position to deal with attacks that come its way. From the report:

China has one of the most sophisticated filtering systems in the world and analysts rate highly the government's ability to carry out cyber attacks. Despite this, China is not capable of defending itself from an attack, which CloudFlare says could have been carried out by a single individual.

Our most recent "State of The Internet" report fingered China as the country from which most attack traffic originated:

During the first quarter of 2013, Akamai observed attack traffic originating from 177 unique countries/regions, consistent with the count in the prior quarter. China remained the top source of observed attack traffic, though its percentage declined by nearly a fifth from the prior quarter. This decline is likely related to Indonesia making a sudden appearance in the second place slot, after a 30x increase quarter-over-quarter.

China topped the list in the previous "State of the Internet" report as well. At the time, SecurityWeek reported:

The fact that China remained at the top of the list isn't so surprising. Earlier this year, Mandiant released a hefty report outlining evidence its researchers had gathered linking an "overwhelming" number of cyber-attacks to China, even to a specific military group. Even the Verizon's 2013 Data Breach Investigation Report called out China for cyber-espionage and other targeted attacks. Verizon claimed China was behind 30 percent of data breaches in its report. "Looking at the full year, China has clearly had the most variability (and growth) across the top countries/regions, originating approximately 16 [percent] of observed attack traffic during the first half of 2012, doubling into the third quarter, and growing further in the fourth quarter," Akamai said.

Below is a chart from our latest report on countries that produce the most attack traffic.



Mapping Networks and Data: Safety in Numbers

Last week I wrote about how redundancy of systems is an important part of Akamai's security at Planetary Scale. This post focuses on another way we keep Internet traffic flowing smoothly in the face of attempted attacks: network and data mapping.

 
Mapping isn't a security technique in itself. Every big network can be mapped out. But there is certainly a huge security benefit to it. In Akamai's case, we've mapped out every server deployed around the globe. If one goes down for any reason, we can quickly reroute traffic to other servers because we know exactly where everything is.

In my research, I've found some good writing on how Akamai maps the Internet. One example is a blog post called "Intelligent User Mapping in the Cloud," written by Eugene Zhang, a senior enterprise architect with Akamai's Professional Services organization. The other is a report called "How Akamai Maps the Net: An Industry Perspective," written by George Economou.

Economou wrote in his 2010 paper:

The dynamic nature of Akamai's scalable and flexible distributed systems design relies heavily on, and benefits greatly from, the rigorous efforts invested in network mapping. Akamai's notion of network mapping is relatively broad, and is crafted into several specific methods for real-time service operation or long-term data analysis. Akamai's network presence and access to traffic provides a very unique vantage point to understand the Internet and how it is operating; these examples provide a sampling of how Akamai takes advantage of this information for very specific purposes. Whatever shapes the Internet morphs into in the future, you can bet that Akamai will be present and will have new ways of mapping it.

Mapping at that scale is complex when you consider the size of the operation. As of 2010, he noted, we had over 60,000 servers deployed in about 1,400 data centers on about 900 networks worldwide. Geographically, these data centers were in about 650 cities in 76 countries around the world.

I look at this as a case study in the concept of safety in numbers. If you walk around dangerous neighborhoods in a big city by yourself, you're going to be defenseless against attackers waiting around the corner. If you have other people with you, you become a much tougher target and are more likely to be left alone.

In the case of the Internet, there's safety in numbers for the technology deployed to route traffic. If we only had a few servers deployed in a couple of countries, it would be much easier to do serious damage to the flow of Internet traffic. But our technology is so spread out and numerous that the traffic is unstoppable.

That's especially the case because of our mapping process. If one guy goes down in a fight, we know exactly where the reinforcements are and can deploy them quickly.

DDoS Attacks Used As Cover For Other Crimes

Protecting customers from DDoS attacks is an Akamai InfoSec specialty. When we see DDoS attempts against our customers, the typical thinking is that someone is doing it to force sites into downtime, which can cost a business millions in lost online sales. 

But sometimes, these attacks are simply a cover operation to distract the victim while something else is going on. 

A story that caught our attention in SC Magazine and elsewhere drives home the point. The article, published Wednesday, explains how the bad guys have stolen millions from U.S. banks while distracting the victims with DDoS activity. From the article:

Criminals have recently hijacked the wire payment switch at several US banks to steal millions from accounts, a security analyst says. Gartner vice president Avivah Litan said at least three banks were struck in the past few months using "low-powered" distributed denial-of-service (DDoS) attacks meant to divert the attention and resources of banks away from fraudulent wire transfers simultaneously occurring. The losses "added up to millions [lost] across the three banks," she said. "It was a stealth, low-powered DDoS attack, meaning it wasn't something that knocked their website down for hours."

The story has gotten the attention of other publications as well. From CNet's article on the subject:

Security researchers have previously highlighted the growing trend of using DDoS attacks to hide fraudulent activity at banks. Dell SecureWorks Counter Threat Unit issued a report (PDF) in April that warned that a popular DDoS toolkit called Dirt Jumper was being used to divert bank employees' attention from attempted fraudulent wire transfers of up to $2.1 million.

Though Litan's write-up on the Gartner website has generated a lot of fresh attention, these kinds of attacks aren't all that new. Nearly a year ago, the threat was outlined in a joint paper from the FBI, Financial Services Information Sharing and Analysis Center (FS-ISAC) and the Internet Crime Complaint Center (IC3). The Sept. 17, 2012 alert said, among other things:

Recent FBI reporting indicates a new trend in which cyber criminal actors are using spam and phishing e-mails, keystroke loggers, and Remote Access Trojans (RAT) to compromise financial institution networks and obtain employee login credentials. The stolen credentials were used to initiate unauthorized wire transfers overseas. The wire transfer amounts have varied between $400,000 and $900,000, and, in at least one case, the actor(s) raised the wire transfer limit on the customer's account to allow for a larger transfer. In most of the identified wire transfer failures, the actor(s) were only unsuccessful because they entered the intended account information incorrectly.

Litan offered some additional advice:

"One rule that banks should institute is to slow down the money transfer system while under a DDoS attack," she wrote. "More generally, a layered fraud prevention and security approach is warranted."

Below: This graphic, from the latest Akamai State of the Internet report, shows which sectors are most impacted by DDoS attacks. 



See You at Edge 2013!

Since our founding, Akamai has been at the vanguard of the Internet revolution.  And as we prepare to celebrate our 15th anniversary this month, our spirit of innovation and our desire to solve the most difficult Internet challenges are just as strong today as they were 15 years ago.

From day one, we have worked hard to gain an understanding of how our customers want to use the Internet to make their businesses be more agile, more customer-centric, and more profitable.  And we use that understanding to guide our innovation and to invent new solutions to help make our customers' visions become a reality.

This October, we'll be gathering in Washington, DC for our 6th annual customer conference--Akamai Edge.  Edge is one of my favorite events because I get to hear from our customers about how they're using Akamai solutions to help deliver on the promise of the Cloud.  Edge is more than a conference; it's become a forum for Internet visionaries from a range of industries and regions to gather and share their secrets to online success.

Our customer base now includes 96 of the Internet Retailer 100 companies, 7 of the top 10 global banks, 19 of the top 20 hotel brands, and more than one-third of the Global 500 companies.  And many of the leaders from those organizations will be on-hand at Edge to share their ideas for pushing the pace of innovation in a hyperconnected world.

This year, we will hear from industry thought leaders such as FedEx CIO Rob Carter, security expert Bruce Schneier, IT visionary and author Gene Kim, and many others from organizations like eBay, Visa, CBS Interactive, and IBM.

I look forward to seeing you in DC and to understanding how we can help improve your business in the rapidly-changing and increasingly-complex online world.

Tom Leighton

Chief Executive Officer, Akamai

Ten Years After the Blaster Worm

This month marks the 10th anniversary of Blaster -- a worm that tore a path of disruption across the Internet. It struck a few months before I started writing about information security. But even then I was well aware that something big had happened. 

I was editing for a daily newspaper at the time and had no idea what patch management, software vulnerabilities and malware were. But Blaster was a big enough deal to make the front page of my paper.

Within 10 months I'd get a crash course. In fact, my first day as a security journalist happened to be the third day of attack from another worm called Sasser. An analysis of Sasser was the first article I ever wrote about anything having to do with InfoSec. Look at the writing quality and you can see how green, nervous and unsure I was. But it wouldn't be long before I was writing daily stories about the latest worms and other malware.

Every story compared the latest worm to the likes of Blaster. It was the monster all who followed were measured against.

Interestingly, one of the companies I often quoted during worm outbreaks was Akamai. Back then almost nobody thought of Akamai as a security player, but if a serious worm outbreak was clogging up Internet traffic, the company had a ring-side seat -- a vantage point like no other.

There are a lot of articles about Blaster's 10th anniversary. The best I've read thus far is this one in CSOonline. In the interest of full disclosure, that's where I worked before coming to Akamai. It's written by Aaron Turner, who was a security strategist at Microsoft when Blaster struck. The pressure Turner felt back then is clear from the intro:

10 years ago, I had a life-altering work experience. I was on the team at Microsoft that was trying to solve 2 huge problems:

--2 Billion computers had been infected with a self-replicating virus (AKA 'worm') now known as Blaster.

--The NE Power Outage was, for a period of time and by some people, attributed to Blaster.

There are many of my former colleagues who spent literally a year of their lives working with me to fix the aftermath of these problems. There are more friends with whom I later worked with at the Idaho National Lab (INL) that helped me understand the breadth of the problem that was uncovered by Blaster, specifically the reliance of critical infrastructure upon consumer-grade technologies.

From my perch as a newspaper editor, I remember all the major news outlets speculating that Blaster was connected to the blackout. I've heard theories in the years since then, though I haven't seen solid proof of a connection.

Also see: "2003 Blackout: An Early Lesson in Planetary Scale?"

The big thing that strikes me as I look back is how rapidly the threat landscape has changed. In the beginning the big news always involved worm outbreaks like Sasser and Mytob. First a big vulnerability would be revealed on Patch Tuesday and then someone would exploit it with malware. Then the trend shifted from covering that to chasing the latest data breach. 

From early 2005 onward, every time a company announced it had suffered a breach, reporters like me would have to drop everything and chase it. Eventually, breaches were announced so often that it ceased to qualify as breaking news. Then the trend shifted to such things as hacktivism and the rise of cloud insecurity. The one constant along the way has been the challenge of regulatory compliance, from HIPAA to Sarbanes-Oxley and PCI DSS.

Also see: "What's New In Security? Nothing."

Now I'm part of Akamai InfoSec, seeing a variety of threats and defensive measures up close. The daily grind usually involves tracking and blunting the latest DDoS attacks targeting our customers. 

I'm not 100 percent certain about what's next, but I suspect the next 10 years will be just as interesting -- if not more so -- than the last 10.


Microsoft's August Patch Matrix

Microsoft released its monthly patch load this week. To help identify and deploy the security fixes, here's a breakdown of the bulletins, the severity of the flaws, and the products impacted.


MS13-059 -- Cumulative Security Update for Internet Explorer (2862772)
Severity and impact: Critical -- Remote Code Execution
Restart requirement: Requires restart
Affected software: Microsoft Windows, Internet Explorer
This security update resolves eleven privately reported vulnerabilities in Internet Explorer. The most severe vulnerabilities could allow remote code execution if a user views a specially crafted webpage using Internet Explorer. An attacker who successfully exploited the most severe of these vulnerabilities could gain the same user rights as the current user. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.

MS13-060 -- Vulnerability in Unicode Scripts Processor Could Allow Remote Code Execution (2850869)
Severity and impact: Critical -- Remote Code Execution
Restart requirement: May require restart
Affected software: Microsoft Windows
This security update resolves a privately reported vulnerability in the Unicode Scripts Processor included in Microsoft Windows. The vulnerability could allow remote code execution if a user viewed a specially crafted document or webpage with an application that supports embedded OpenType fonts. An attacker who successfully exploited this vulnerability could gain the same user rights as the current user. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.

MS13-061 -- Vulnerabilities in Microsoft Exchange Server Could Allow Remote Code Execution (2876063)
Severity and impact: Critical -- Remote Code Execution
Restart requirement: May require restart
Affected software: Microsoft Server Software
This security update resolves three publicly disclosed vulnerabilities in Microsoft Exchange Server. The vulnerabilities exist in the WebReady Document Viewing and Data Loss Prevention features of Microsoft Exchange Server. The vulnerabilities could allow remote code execution in the security context of the transcoding service on the Exchange server if a user previews a specially crafted file using Outlook Web App (OWA). The transcoding service in Exchange that is used for WebReady Document Viewing uses the credentials of the LocalService account. The Data Loss Prevention feature hosts code that could allow remote code execution in the security context of the Filtering Management service if a specially crafted message is received by the Exchange server. The Filtering Management service in Exchange uses the credentials of the LocalService account. The LocalService account has minimum privileges on the local system and presents anonymous credentials on the network.

MS13-062 -- Vulnerability in Remote Procedure Call Could Allow Elevation of Privilege (2849470)
Severity and impact: Important -- Elevation of Privilege
Restart requirement: Requires restart
Affected software: Microsoft Windows
This security update resolves a privately reported vulnerability in Microsoft Windows. The vulnerability could allow elevation of privilege if an attacker sends a specially crafted RPC request.

MS13-063 -- Vulnerabilities in Windows Kernel Could Allow Elevation of Privilege (2859537)
Severity and impact: Important -- Elevation of Privilege
Restart requirement: Requires restart
Affected software: Microsoft Windows
This security update resolves one publicly disclosed vulnerability and three privately reported vulnerabilities in Microsoft Windows. The most severe vulnerabilities could allow elevation of privilege if an attacker logged on locally and ran a specially crafted application. An attacker must have valid logon credentials and be able to log on locally to exploit these vulnerabilities. The vulnerabilities could not be exploited remotely or by anonymous users.

MS13-064 -- Vulnerability in Windows NAT Driver Could Allow Denial of Service (2849568)
Severity and impact: Important -- Denial of Service
Restart requirement: Requires restart
Affected software: Microsoft Windows
This security update resolves a privately reported vulnerability in the Windows NAT Driver in Microsoft Windows. The vulnerability could allow denial of service if an attacker sends a specially crafted ICMP packet to a target server that is running the Windows NAT Driver service.

MS13-065 -- Vulnerability in ICMPv6 Could Allow Denial of Service (2868623)
Severity and impact: Important -- Denial of Service
Restart requirement: Requires restart
Affected software: Microsoft Windows
This security update resolves a privately reported vulnerability in Microsoft Windows. The vulnerability could allow a denial of service if the attacker sends a specially crafted ICMP packet to the target system.

MS13-066 -- Vulnerability in Active Directory Federation Services Could Allow Information Disclosure (2873872)
Severity and impact: Important -- Information Disclosure
Restart requirement: May require restart
Affected software: Microsoft Windows
This security update resolves a privately reported vulnerability in Active Directory Federation Services (AD FS). The vulnerability could reveal information pertaining to the service account used by AD FS. An attacker could then attempt logons from outside the corporate network, which would result in account lockout of the service account used by AD FS if an account lockout policy has been configured. This would result in denial of service for all applications relying on the AD FS instance.

2003 Blackout: An Early Lesson in Planetary Scale?

On the drive to work this morning, I listened to a report about this being the 10th anniversary of the massive blackout that plunged an area from New York City to Toronto into darkness. I immediately thought of a post Akamai CSO Andy Ellis wrote recently called "Environmental Controls at Planetary Scale."

It might be overreaching to say the 2003 blackout was an early case study in the success and failures of controls at Planetary Scale. Andy was talking about the environmental controls in data centers around the world. The blackout wasn't something individual data centers had much control over, and the power failure was geographically limited to a section of the U.S. and Canada. The blackout's root cause was a software glitch in an alarm system inside one of FirstEnergy Corp.'s control rooms in Ohio. Workers apparently didn't realize they needed to redistribute power after overburdened transmission lines collapsed onto overgrown trees. A manageable local blackout thus snowballed into widespread electric grid failure.

Still, I can't help but think of the parallels. Andy's blog post examined the pros and cons of investing large sums of money in data center environmental controls. He wrote: 

Is the cost worth the hassle? If you run one data center, then the costs might be worthwhile - after all, it's only a few capital systems, and a few basis point improvements in MTBCF will likely be worth that hassle (both in operational false positives as well as deployment cost). But what if you operate in thousands of data centers, most of them someone else's?  The cost multiplies significantly, but the marginal benefit significantly decreases - as any given data center improvement only affects such a small portion of your systems.  Each data center in a planetary scale environment is now as critical to availability as a power strip is to a single data center location.  Mustering an argument to monitor every power strip would be challenging; a better approach is to have a drawer full of power strips, and replace ones that fail.

I see lessons here in how we manage interconnected electrical systems where a failure in one place can spill over to many other places the world over. Security experts have said and written much in recent years about the threat to global power grids. Among other things, they've warned, a hacker could compromise SCADA controls in one power station and maximize the damage if the target is the weak link in a much bigger chain of power distribution centers.

The ways in which we manage the threat carry similar pros and cons to that of the environmental control management Andy wrote about.

On this particular anniversary, I throw it out there as food for thought.





How Akamai eDNS Protects Against DNS Attacks

Andy Ellis's recent post "DNS Reflection Defense" describes how DNS works and lists general guidelines for defending against DNS attacks. This post continues the discussion of DNS protection by describing how Akamai's "eDNS" offering protects customers from both volumetric and reflective attacks on DNS infrastructure.

What is a Volumetric Attack?
In a volumetric attack, an attacker uses a botnet to generate a large volume of DNS requests. The attacker's goal is to take down the target web site by taking down its DNS infrastructure. A variant of this attack uses spoofed IP addresses to defeat IP address-based access control. Brobot has used such tactics against financial institutions, particularly during Phase II of its attacks.

How Akamai eDNS Defends Against Volumetric Attacks
Akamai eDNS defends against volumetric attacks through excess capacity, rate controls, and a positive security model. Akamai's DNS system is one of the largest in the world. Normal traffic served by Akamai's DNS system is less than 1 percent of total capacity. Akamai eDNS also provides rate limiting per IP and per request type. Requests from specific IP addresses can be limited to pre-defined thresholds, and rate thresholds can be set lower for commonly used DDoS request types such as ANY and DNSSEC. Finally, eDNS can fall back to a positive security model in the rare event that higher rate-limiting thresholds are crossed. In this case, eDNS will prioritize traffic from a list of fewer than 1 million named, known "good" servers. These servers cover 95 percent of all known DNS traffic. The positive security model can effectively mitigate a vector in which the attacker spoofs IP addresses.
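To make the rate-control idea concrete, here is a minimal sketch of a token-bucket limiter keyed by source IP and query type, with lower thresholds for the ANY and DNSSEC-related query types mentioned above. The thresholds, names and structure are illustrative only and are not Akamai's implementation.

```python
import time
from collections import defaultdict

# Illustrative per-second limits; abuse-prone query types get lower thresholds.
DEFAULT_RATE = 50.0
QUERY_TYPE_RATES = {"ANY": 5.0, "DNSKEY": 5.0, "RRSIG": 5.0}

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate              # tokens added per second
        self.capacity = burst         # maximum bucket size
        self.tokens = burst
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per (source IP, query type) pair.
buckets = defaultdict(dict)

def allow_query(source_ip, qtype):
    """Return True if this DNS query is within the per-IP, per-type rate limit."""
    rate = QUERY_TYPE_RATES.get(qtype, DEFAULT_RATE)
    bucket = buckets[source_ip].setdefault(qtype, TokenBucket(rate, burst=2 * rate))
    return bucket.allow()
```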

What is a Reflection Attack?
In a reflection attack, an attacker makes a request to an open resolver using a UDP packet whose source IP is the IP address of the target. The request is usually one that will result in a large response, such as a DNS ANY request or a DNSSEC request, which lets the attacker multiply the bandwidth directed at the target by up to 100x. The "multiplication" factor is what makes this particular attack dangerous, as traffic can reach 200-300 Gbps. The Spamhaus attack is one example of a recent reflection attack.
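Some back-of-the-envelope math shows why the multiplication factor matters. The packet sizes below are illustrative, not measurements of a specific attack.

```python
# Illustrative sizes: a small spoofed query versus a large signed response.
query_bytes = 64
response_bytes = 3_300

amplification = response_bytes / query_bytes
print(f"Amplification factor: ~{amplification:.0f}x")

# Attacker bandwidth needed to land 300 Gbps on the victim at that ratio.
target_gbps = 300
print(f"Spoofed query bandwidth needed: ~{target_gbps / amplification:.1f} Gbps")
```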

How Akamai eDNS Defends Against Reflection Attacks
Akamai eDNS defends against reflection attacks first by applying specialized rate limiting to ANY and DNSSEC requests, just as it does against volumetric attacks; this ensures eDNS is not used as a reflector. As important, because the customer has outsourced DNS to Akamai, they can effectively reject all incoming traffic to their data center on port 53, since DNS resolution is handled by eDNS. The customer may even choose to block port 53 at the ISP level, thus ensuring that their connectivity to the Internet is not saturated.

Many steps can and should be taken to promote Internet hygiene and reduce the effectiveness of DNS attacks. Until those steps are taken, customers can rely on Akamai eDNS to protect their infrastructure and ensure their websites remain accessible to legitimate users.

The recently disclosed BREACH vulnerability in HTTPS enables an attack against SSL-enabled websites. A BREACH attack leverages the use of HTTP-level compression to gain knowledge about some secret inside the SSL stream, by analyzing whether an attacker-injected "guess" is efficiently compressed by the dynamic compression dictionary that also contains the secret. This is a type of attack known as an oracle, where an adversary can extract information from an online system by making multiple queries to it.

BREACH is interesting in that it isn't an attack against SSL/TLS per se; rather, it is a way of compromising some of the secrecy goals of TLS by exploiting an application that will echo back user-injected data on a page that also contains some secret (a good examination of a way to use BREACH is covered by Sophos). There are certain ways of using HTTPS which make this attack possible, and others which merely make the attack easier.
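The oracle is easy to demonstrate locally with nothing more than zlib: a guess that overlaps the secret typically compresses better than one that does not. This is only an illustration of the principle; a real BREACH attack measures TLS record lengths on the wire rather than calling zlib itself, and the response body and secret below are invented for the example.

```python
import zlib

SECRET = "csrf_token=9f8a7b6c5d4e"   # the static secret embedded in every response

def compressed_length(user_input):
    """Length of a compressed response that echoes user input alongside the secret."""
    body = f"<p>You searched for: {user_input}</p><input type='hidden' value='{SECRET}'>"
    return len(zlib.compress(body.encode()))

# The matching guess shares a longer substring with the secret, so DEFLATE can
# encode it as a back-reference and the compressed body is typically shorter.
print("matching guess:", compressed_length("csrf_token=9f8a"))
print("wrong guess:   ", compressed_length("csrf_token=zzzz"))
```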

Making attacks possible

Impacted applications are those which:

  • Include in the response body data supplied in the request (for instance, by filling in a search box);
  • Include in the response some static secret (token, session ID, account ID); and
  • Use HTTP compression.

For each of these enabling conditions, making it untrue is sufficient to protect a request. Therefore, never echoing user data, having no secrets in a response stream or disabling compression are all possible fixes. However, making either of the first two conditions false is likely infeasible; secrets like Cross-Site Request Forgery (CSRF) tokens are often required for security goals, and many web experiences rely on displaying user data (hopefully sanitized to prevent application injection attacks). Disabling compression is possibly the only "foolproof" and straightforward means of stopping this attack - although it may be sufficient to only disable compression on responses with dynamic content. Responses which do not change between requests do not contain a user-supplied string, and therefore should be safe to compress.
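A sketch of the "compress only static responses" idea follows. The cacheability check used to decide what counts as dynamic is an assumption for illustration; real deployments would key this decision off their own knowledge of which responses echo user data.

```python
import gzip

def maybe_compress(body, headers):
    """Gzip only responses that are safe to compress: cacheable content that does
    not reflect request data. Dynamic pages are left uncompressed so a BREACH-style
    oracle cannot measure how well an injected guess compresses against a secret."""
    cache_control = headers.get("Cache-Control", "")
    is_dynamic = "no-store" in cache_control or "private" in cache_control
    if is_dynamic:
        return body, headers
    compressed = gzip.compress(body)
    new_headers = dict(headers)
    new_headers["Content-Encoding"] = "gzip"
    new_headers["Content-Length"] = str(len(compressed))
    return compressed, new_headers
```

A Property Manager rule like the one described at the end of this post applies the same idea at the edge.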

Disabling compression is likely to be expensive - some back-of-the-envelope numbers from Guy Podjarny, Akamai's CTO of Web Experience, suggest a significant performance hit. HTML compresses by a factor of around 6:1 - so disabling compression will increase bandwidth usage and latency accordingly. For an average web page, excluding HTML compression will likely increase the time to start rendering the page by around half a second for landline users, with an even greater impact for mobile users.

Making attacks easier

Applications are more easily attacked if they:

  • Have some predictability around the secrets; either by prepending fixed strings, or having a predictable start or end;
  • Are relatively static over time for a given user; and
  • Use a stream cipher.

This second category of enablers presents a greater challenge when evaluating solutions. Particularly challenging is the question of how much secrecy each one buys, and at what cost.

Altering secrets between requests is an interesting challenge - a CSRF token might be split into two dynamically changing values, which "add" together to form the real token (x * y = CSRF token). Splitting the CSRF token differently for each response ensures that an adversary can't pin down the actual token with an oracle attack. This may work for non-human-parseable tokens, but what if the data being attacked is an address, phone number, or bank account number? Splitting them may still be possible (using JavaScript to reassemble in the browser), but the application development cost to identify all secrets and implement protections that do not degrade the user experience seems unachievable.
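Here is a minimal sketch of the splitting idea for a machine-readable secret like a CSRF token. It combines the two halves with XOR, a common way to implement the "add together" step; the function names and sizes are illustrative, not a prescribed design.

```python
import os

def mask_token(token):
    """Encode a secret as (mask || mask XOR token), with a fresh mask per response."""
    mask = os.urandom(len(token))
    return mask + bytes(m ^ t for m, t in zip(mask, token))

def unmask_token(masked):
    """Recombine the two halves to recover the original token."""
    half = len(masked) // 2
    mask, xored = masked[:half], masked[half:]
    return bytes(m ^ x for m, x in zip(mask, xored))

token = os.urandom(16)
assert unmask_token(mask_token(token)) == token
# Each response carries different bytes, so an oracle never sees the token itself repeat.
```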

Altering a page to be more dynamic, even between identical requests, seems possibly promising, and is certainly easier to implement. However, the secrecy benefit may not be as straightforward to calculate - an adversary may still be able to extract from the random noise some of the information they were using in their oracle. A different way to attack this problem might not be by altering the page, but by throttling the rate at which an adversary can force requests to happen. The attack still may be feasible against a user who is using wireless in a cafe all day, but it requires a much more patient adversary.

Shifting from a stream cipher to a block cipher is a simple change which increases the cost of setting up a BREACH attack (the adversary now has to "pad" attack inputs to hit a block size, rather than getting an exact response size). There is a slight performance hit (most implementations would move from RC4 to AES128 in TLS1.1).

Defensive options

What options are available to web applications?

  • Evaluate your cipher usage, and consider moving to AES128.
  • Evaluate whether supporting compression on dynamic content is a worthwhile performance/secrecy tradeoff.
  • Evaluate applications which can be modified to reduce secrets in response bodies.
  • Evaluate rate-limiting. Rate-limiting requests may defeat some implementations of this attack, and may be useful in slowing down an adversary.

How can Akamai customers use their Akamai services to improve their defenses?

  • You can contact your account team to assist in implementing many of these defenses, and discuss the performance implications.
  • Compression can be turned off by disabling compression for html objects. The performance implications of this change should be well understood before you make it, however (See the bottom of this post for specifics on one way to implement this change, limited only to uncacheable html pages, in Property Manager).
  • Rate-limiting is available to Kona customers.
  • Have your account team modify the cipher selections on your SSL properties.

Areas of exploration

There are some additional areas of interest that bear further research and analysis before they can be easily recommended as both safe *and* useful.

  • Padding response sizes is an interesting area of evaluation. Certainly, adding a random amount of data would at least help make the attack more difficult, as weeding out the random noise increases the number of requests an adversary would need to make. Padding to multiples of a fixed length is also interesting, but is also attackable, as the adversary can increase the size of the response arbitrarily until they force the response to cross an interesting boundary. A promising thought from Akamai's Chief Security Architect Brian Sniffen is to pad the response by a number of bytes derived from a hash of the response; a minimal sketch of this idea appears after this list. This may defeat the attack entirely, but merits further study.
  • An alternative to padding responses is to split them up. Ivan Ristic points us to Paul Querna's proposal to alter how chunked encoding operates, to randomize various response lengths.
  • It may be that all flavors of this attack involve HTTPS responses where the referrer is an HTTP site. Limiting defenses to only apply in this situation may be fruitful - for instance, only disabling HTML compression on an HTTPS site if the referrer begins with "http://". Akamai customers with Property Manager enabled can make this change themselves (Add a rule: Set the Criteria to "Match All": "Request Header", "Referer", "is one of", "http://*" AND "Response Cacheability", "is" "no_store"; set the Behaviors to "Last Mile Acceleration (Gzip Compression)", Compress Response "Never". This requires you to enable wildcard values in settings.).
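Below is a minimal sketch of the hash-derived padding idea mentioned in the first bullet above. The pad range and the HTML-comment padding channel are arbitrary illustrations, not a vetted design: the point is only that the pad length is deterministic for a given body, yet changes whenever attacker-influenced content changes the body.

```python
import hashlib

def pad_length(body, max_pad=256):
    """Deterministic pad length derived from the response content itself."""
    digest = hashlib.sha256(body).digest()
    return int.from_bytes(digest[:2], "big") % max_pad

def pad_response(body):
    # An HTML comment keeps the page renderable; other content types would
    # need a different padding channel.
    return body + b"<!-- " + b"p" * pad_length(body) + b" -->"

page = b"<html><body>account=12345</body></html>"
print(len(page), "->", len(pad_response(page)))
```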

crossposted at csoandy.com.

Microsoft Security Patches Coming Tomorrow

Tomorrow is the second Tuesday of the month, which those of us in security know as Patch Tuesday -- the day Microsoft unloads its security updates. It's an important calendar item for Akamai customers, given how dominant Windows machines are in many companies.

Late last week, Microsoft offered a preview of what to expect. What follows is a summary of the bulletins planned, along with the severity and products affected.

Bulletin 1 -- Critical (Remote Code Execution); requires restart; affects Microsoft Windows, Internet Explorer
Bulletin 2 -- Critical (Remote Code Execution); may require restart; affects Microsoft Windows
Bulletin 3 -- Critical (Remote Code Execution); may require restart; affects Microsoft Server Software
Bulletin 4 -- Important (Elevation of Privilege); requires restart; affects Microsoft Windows
Bulletin 5 -- Important (Elevation of Privilege); requires restart; affects Microsoft Windows
Bulletin 6 -- Important (Denial of Service); requires restart; affects Microsoft Windows
Bulletin 7 -- Important (Denial of Service); requires restart; affects Microsoft Windows
Bulletin 8 -- Important (Information Disclosure); may require restart; affects Microsoft Windows

DefCON Observations from a First-Timer

In April of this year, InfoSec launched a new team called Customer Compliance. Several senior InfoSec employees joined its ranks, and I was hired into the team. My name is Meg Grady-Troia, and I'm a member of Akamai's Customer Compliance team because I am an anthropologist, an educator, and a writer. My job is to find creative and effective ways to share Akamai's security posture and platform with our customers, and to answer the questions that our sales team is constantly fielding as they do their great work. That means that I am learning InfoSec and Akamai culture as fast as I can, only ever a step or two ahead of the questions I'm answering. It's great fun, and also a great challenge.

Here at Akamai, we call the on-boarding process "drinking from the firehose." With the blessing of the InfoSec department, I'll be sharing some of that firehose process with you, starting with excerpts from my blog about my experiences at one of the country's biggest hacker and security conventions, DefCON.


The Lion Sleeps Tonight: Preparing for DefCON
As I began preparations for my trip to DefCON, co-workers and peers gave me all manner of advice. As with all advice, some of it was contradictory, some of it was impossible, and some of it was indispensable. The advice about where to eat (taco stand reviews forthcoming, hopefully), what the Strip's environment is like (over-oxygenated air, brutal sun, great pools), and who I should try to meet (everyone!) was easily assimilated, but as the feedback about safety and security rolled in, it was hard not to panic.

Some of the things I was told were:
  • all traffic in Las Vegas is monitored, no network (even with VPN or Tor) is secure;
  • all data traffic on mobile devices is insecure & 4G is easy to sniff;
  • all power outlets might be transmitting more than power or be tampered with to damage equipment;
  • all public places may be scanned for RFID tags, compromising my identity or finances;
  • not to identify my employer or my job;
  • not to travel alone;
  • not to accept drinks from anyone;
  • not to bring electronics that had any controlled or secret data; and
  • most hotel rooms are bugged.

Most of these things could be true most of the time, in fact: I understand that "safety" is an ideal rather than an actual possibility. I take risks every day; the more complex and valuable the actions I am taking are, the higher the risks are, too. Even so, one co-worker likened DefCON to "walking into the lion's den."  Lions are dangerous, but in predictable ways. And, he added graciously, "only lions walk into lion's dens most of the time."

My boss gave me the best advice, though, and it is advice that is relevant to all Security professionals and amateurs: decide what your risk tolerance is, know what your powers of protection are, understand what vulnerabilities are inevitable, calm down.

His advice was to find my own tolerance for risk and my own security posture, rather than to blindly follow the precautions that my peers find valuable. But it was also like the serenity prayer for Security: "SuperUser grant me the powers to protect the resources that I can, the serenity to accept the risks I cannot mitigate, and the wisdom to know the difference."

As I worked to assemble the supplies I knew I wanted -- a burner laptop so that I could use the Akamai VPN and not risk exposing the compliance data that lives on my usual work machine, an RFID-blocking wallet to hold my credit cards and ID, a battery charger for my cellphone for long days at the Con, extra sunscreen for my pale, freckle-prone skin -- I worked to build a working model of what I wanted to take away from DefCON and why I was attending.

It turned out to be pretty easy: I want to know if I'm a lion, too. The opportunity to walk into rooms full of brilliant people who care deeply about testing the limits of our social contracts and agreements, who live on the Internet where the normal boundaries and borders of our geo-political world are blurred, and who are more deeply committed to the cycle of build-and-break than many people in this world is a great one.

It's not clear to me yet if I will be a lion, a lion-tamer, or just a sheep in lion's clothes, but I know I am excited. I may not have prepared as well as some of my peers did, but I have a marked-up schedule, a gorgeous badge of my own, and I am ready to learn.

DefCON Day 1: The Lay of the Land
Between talks, the hallways of the convention center fill with slow-moving streams of people walking between rooms. The hallways aren't ever empty, though, even when the scheduled Con events are long over for the day. At 3am, there are still parties, contests, and social events happening all over the conference center, not to mention the flocks of people at every bar nearby.

The rooms of the convention center come in a few main flavors:
  • The "Tracks:" where talks take place on every subject from new 0days in routers to the ethics of working for the Feds;
  • The "Villages:" where people practice skills and offer demos in social engineering, lockpicking, electronics tampering, and more;
  • The "Contests:" where folks play Capture the Flag, and myriad other games, including Hacker Jeopardy; and
  • The "Lounges:" where DJs play, art installations blink and move, and folks congregate with coffee or beer.
I am over-simplifying this slightly for the sake of clarity, as there are at least as many places where those distinctions are blurred, if not lost.

There is no shortage of folks who seem to spend all their time in just one of those places: hanging out in the chill out café, the lockpick village, or the vendor room. Neither is there a shortage of people who never make it into any of the rooms because they find strangers and friends in the hallways and stop to talk or hack together. Folks tend to refer to this practice of targeted socializing as HallwayCon or LobbyCon.

With over 13,000 attendees, the Con has its own fleet of volunteers and organizers who check badges, enforce physical security, help speakers manage time and equipment, sell merchandise, and answer a million odd questions. All these folks are called the "Goons," and they wear shirts that identify them to the crowd. Despite the strong currents of anti-establishment sentiment and independence in the attendees that I met, I saw nothing but smiles and respect for the Goons; their work appeared to be as much about social cohesion as about enforcement. One of the traditions that amused me most was that every new speaker was interrupted by Goons with a bottle of bourbon and toasted before being allowed to complete their talk.

DefCON isn't one single community, though, and I met people whose affiliations varied wildly. Attendees are breakers, builders, government employees, and Fed-haters, just to cite some of the most-discussed differences. Diversity in other directions was more limited, though, and I saw many more white people than people of color. I saw more women, more kids, and more binary-breakers than I had been led to expect, though, which was a treat. Despite a million jokes about the "uniform" of the Con being jeans and a black t-shirt, there were plenty of creative costumes and innumerable blinky LED and EL wire accessories.

Among the most interesting accessories of DefCON attendees are the intricate badges. Wired posted an article when this year's were revealed, which you can read here. The badges are part of a suite of branding materials that come complete with puzzles to solve and a contest to win for the final decryption. This year, the badges are heavy plastic designed to look like playing cards, with the traditional suits replaced by four hacker media: phone (phreaking and communication), key (cryptology and building), disk (code and data), and jolly roger (piracy and breaking). The badges also had numerical codes, circuit diagrams, kanji, and other forms of communication on them. The branding carried through to the programs, art on the walls and floors, and installations in many of the conference rooms and lounges.

DefCON happens outside of the conference spaces, too, at parties sponsored by various groups ranging from hacker consortiums to big companies. The parties happen all over Las Vegas, taking over fancy suites, restaurants, pools, bars, and more. Getting into parties is as much a contest as any of the official ones: party entrance schemes involved solving riddles to find parties, being given small trinkets that granted access, social engineering your name onto secret lists, or physically being tagged by folks with stamps, markers, or cans of colored hairspray. Lest that sound too much like some hyperbolic movie representation, let me also tell you that the Con is full of recruiters, full of folks disillusioned with the revelations of the last few years (WikiLeaks, Snowden, PRISM, etc.) looking for people who might be able to save them, and more than a few folks who are there to sell something.

In other words, whatever it may have been in the past, DefCON, at 21 years of age, is old enough and big enough to be many different kinds of events at once, and the selection I saw has more to do with the people I met through my co-workers, the talks I attended, the shuttles I rode between hotels, and the contests in which I participated than with the nature of the event. As one colleague told me "DefCON is what you make of it."

Meg Grady-Troia is a program manager with the InfoSec team at Akamai.

Those who know me are aware of my fondness for Follow Friday -- a tradition on Twitter where people recognize the folks whose tweets keep them inspired and informed. In my case, the focus is on people in the InfoSec community. I have a list on Twitter that will show you 275 security pros I currently follow. You can see their bios and press the follow button on those you think might be of value.

Below is a list of Twitter handles worth following for consistently great debate and resource sharing specifically from the people of Akamai InfoSec. I've known a few of them for years and am getting acquainted with the rest. All have inspired me thus far. Follow them and you will be inspired, too.

Let's start with me: Bill Brenner (@BillBrenner70) -- Security scribe, family man, author of The OCD Diaries and Akamai InfoSec's resident storyteller · theocddiaries.com

Brian Sniffen (@Brian_Sniffen) -- Chief security architect, http://packets.evenmere.org/

Akamai InfoSec (@Akamai_InfoSec) -- The official Twitter profile for Akamai's InfoSec department

George The Penguin (@SecurityPenguin) -- The Akamai Penguin of Awesome. My tweets aren't even my own, let alone my employer's. Globe waddling · securitypenguin.com

Andy Ellis (@csoandy) -- Akamai CSO, Parent, Bostonian, Oenophile, Patriots Fan, personal stylist, FCSP, FiveFinger runner. Tweets my own.
Cambridge, MA · csoandy.com

Larry Cashdollar (@_larry0) -- Husband & Father. Works @Akamai. Hobbyist Vulnerability Researcher & Exploit Coder.  New Hampshire, USA · vapid.dhs.org

Christian Ternus (@ternus) -- Security researcher @Akamai. Performing a timing channel attack on the computational ultrastructure of spacetime.
Cambridge, MA · cternus.net

Michael Smith (@rybolov) -- Akamai's CSIRT director and international man of mystery.  Boston, Ma. · guerilla-ciso.com

Joshua Corman (@joshcorman) -- Security Strategist/Ex-Analyst/Knowledge Seeker/Zombie Killer/Co-Founder of http://RuggedSoftware.org  / Statements are mine & may not reflect Akamai's. It depends · blog.cognitivedissidents.com

Martin McKeay (@mckeay) -- Blogger, podcaster, Akamai Security Evangelist.  I never thought I'd say, I wasn't paranoid enough. My opinions are my own, end of statement. Santa Rosa, CA · mckeay.net

Kathryn Kun (@theladykathryn) -- Program manager for Akamai InfoSec · weirdsistersblog.com (Not a security blog, but worth your time all the same.)

Darius Jahandarie (@djahandarie) -- Haskell, Agda, Math, Security, 日本語, Cambridge MA, USA · althack.org

James Salerno (@minion_at_work) -- Program manager, Akamai InfoSec

Daniel Franke (@dfranke) -- Security researcher. Keeping the internet safe for anarchy. Central Massachusetts

Kevin Riggle (@kevinriggle) -- Security researcher, Akamai InfoSec . free-dissociation.com

Dave Lewis (@gattaca) -- Akamai InfoSec evangelist, security type, #blogger, podcaster, breaker of things, bass player, dad, #infosec #smartgrid, #cloud, defcon goon, creator of (-:|3 emoticon. I love my job. Canada · liquidmatrix.org/blog/


Federation Explained

I will start this blog entry with a disclaimer: there are many definitions out there for CDN Federation; most are feasible, but many are just not as practical or as easy to implement as advertised.  All you Trekkies that came here because of a Google alert about federation, sorry... we are talking about content delivery done seamlessly between two or more differing entities, not the United Federation of Planets.

To cut through all of the noise about CDN Federation, let's begin with the two traffic flow scenarios that make up CDN Federation: Outbound, and Inbound (which we will call Termination). To be clear, this is not network packet flow, which happens in both directions in either case, but rather where content originates and where it is consumed.  The diagrams below will help illustrate what I mean.

[Diagram: Outbound CDN Federation]

Outbound CDN Federation - This is a widely deployed type of Federation and refers to when an Operator who owns some content, a network, and a bunch of subscribers needs to have greater reach for that content than the Operator's network allows.  The Operator may also need some level of excess capacity or redundancy for their content and subscribers.  This could be on a local or global level.  The reasons for Federating are discussed a bit later, but for now you can think of Outbound Federation as being done in order to distribute content.

 

[Diagram: Termination (Inbound) CDN Federation]

 

Termination or Inbound CDN Federation - This type of CDN Federation is done by Operators who are trying to localize and manage traffic. The traffic originates in a different operator's network but is consumed by the subscribers of the Operator deploying the Inbound Federation.  Once again, the reasons for implementing this type of CDN Federation are discussed a bit later but for now you can think of Termination as being deployed to manage network traffic.

Now let's get to the reasons why one would need Federation or Termination.  The few practical reasons listed below are not applicable to everyone, but Operators will find at least one of them appealing enough to become part of a Federation.

1. Global Reach (Outbound Federation) - this is the sexiest of the advertised reasons for Federating.  Think of a perfect world where any Content Provider (CP) can reach any subscriber regardless of location.  This is indeed a very good reason to federate your CDN.  Much like your mobile phone service, where you can make calls and send text messages from pretty much anywhere to anyone, Federating allows roaming subscribers of a major cable operator or telecom to view their home Operator's content on any network, even a competitor's.  This roaming may not necessarily be global in nature, as with mobile phones; more often it is down the street at your local coffee shop on a different ISP.

2. Maintenance, Overflow and Flash Crowd Handling (Outbound Federation) - No one really talks about this particular aspect of CDN Federation because, well, it is not as exciting as global reach. But it is probably the most useful part of Federation.  This is generally done as additional on-demand capacity under different administrative control than the Federating Operator's. This CDN capacity is deployed in the same Operator's network or in the network of an adjacent Operator in the same geographical vicinity as the home Operator's CDN.  Sometimes this adjacent Operator can be a competitor.  This type of Federation is done in order to handle scheduled or unscheduled maintenance and flash crowds created by major events.

3. Traffic Management and Localization (Termination) - When you are an Operator with subscribers, those subscribers can request content that originates anywhere in the world, and they demand Quality of Experience (QoE) for the content they consume.  When that content comes from another operator, it typically enters at a very expensive, capacity-limited point in the Operator's network and then traverses the home network with very limited Operator control until it reaches the subscriber.  That is expensive and hard to manage for the home Operator, and it often means poor QoE for the subscriber.  Ultimately, nobody wins.

The solution to this challenge? Termination: bring the traffic deep into the Operator's network and cache it at strategic points, making it manageable.  It is hard not to sound biased here, but the Federation has to be with someone who has a lot of content and, more importantly, controls that content for legal and technical reasons.

With that said, and now that we are talking about Federation in the same context, in the next blog post I will talk about the Global Reach portion of Federation and some of the challenges associated with it.

Michael Kuperman is a senior director of Business Development at Akamai    

Quick Wins with Website Protection Services

Securosis analyst Mike Rothman recently wrote a paper on the benefits of website protection services (WPS). I recommend you give it a read, as it's some of the most descriptive research I've seen on the subject.

Content in the report was developed independently of any sponsors and is based on material originally posted on the Securosis blog. It concludes that website protection services can add measurable security to your web presence in short order, at a reasonable price compared to deploying and managing your own equipment and infrastructure.

From the summary:

As with any managed security service, WPS can offer a quick way to deploy protection without investing in significant infrastructure and hard-to-find application security skills. Of course there are trade-offs in flexibility and control when using any managed service, and every organization needs to balance those trade-offs when making build or buy decisions on key security initiatives. 


The paper explores those trade-offs and the best way to manage them, with guidance in such areas as website protection basics as well as deployment and ongoing management.

Check it out HERE.

Carder Gangs Continue Account Takeover Attempts

Akamai InfoSec continues to monitor repeated attempts to hijack the accounts of those doing business with our customers. In this attack, the bad guys reuse credentials they've stolen from other sites to fraudulently acquire merchandise.

Attackers use automated tools commonly referred to as account checkers to quickly determine valid user ID and password combinations across a large number of ecommerce sites. The tools help the attackers identify valid accounts quickly so they can gain access and acquire names, addresses and credit card data from user profiles.

--More on this and other security threats in Akamai's latest State of The Internet report, available for download HERE.

"We first started getting help requests last year from customers who noticed unusual activity," said Akamai CSIRT Director Michael Smith. "In March another customer reported strange activity."

Michael Kun, a security response engineer with Akamai's CSIRT team, said carder gangs acquire lists of user IDs and passwords from SQL injection and from online forums. They exploit users who are sloppy with their credentials, identifying those who use the same passwords for multiple commerce sites.

"They log in and use stolen credit cards to fraudulently buy, for example, a $200 gift card they can either sell for a profit or use themselves," Kun said. They also store cards in merchant shopping carts for future use.

--Please join us on Sept 26th at 11 AM ET for our next "Crush the Rush" holiday readiness webinar to learn more about how to protect your site and holiday season revenue. Mike Smith, director of our CSIRT Team, and Daniel Shugrue will be detailing the types of attack trends that Akamai is seeing and ways in which other customers have mitigated the latest threats. Click here for more details.

Red flags indicating an account checker has been used against an ecommerce site include the following:

• User complains that their account mailing address has been altered.
• Multiple other user accounts altered in a similar time frame.
• Many failed logins detected in a short period of time from a small number of IP addresses.
• Locked accounts.
• Higher than normal rate of fraud activity.

Kun said many retailers have been affected. Fortunately, though, Akamai has prevented attackers from succeeding in attempts against its customers. "Every couple weeks we get a message from a customer who has seen strange behavior and wants to know if we've encountered this before.  We immediately recognize the activity and direct them to our advisory and set up their WAF configuration to block the activity," Smith said.

Companies can protect their customers in several ways. The use of a CAPTCHA or other validation steps requiring user intervention will defeat the authentication-checking tools.

Rate controls are particularly useful, specifically to count requests to the login page. Rate controls work by counting the number of requests from an individual IP address. "We scope down the rate control just to the login page and then we can set a threshold of 'if you send 10 login requests in 5 seconds, you're an automated login program not a human being behind a browser and we can safely block you,'" Smith said.
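
To make that concrete, here is a minimal sketch of that kind of per-IP rate control, scoped to the login page and using the 10-requests-in-5-seconds threshold from Smith's example. It is illustrative only; the function name, path check and in-memory tracking are assumptions made for the sake of the example, not how Kona Site Defender is configured.

```python
import time
from collections import defaultdict, deque

# Illustrative sliding-window rate control, scoped to the login page only.
# Threshold mirrors the example above: more than 10 login requests in 5 seconds per IP.
WINDOW_SECONDS = 5
MAX_REQUESTS = 10

_recent_logins = defaultdict(deque)  # client IP -> timestamps of recent login requests

def should_block(client_ip, path, now=None):
    """Return True if this login request looks automated and should be blocked."""
    if path != "/login":              # only count requests to the login page
        return False
    now = time.time() if now is None else now
    window = _recent_logins[client_ip]
    window.append(now)
    # Drop timestamps that have fallen outside the 5-second window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS
```

Scoping the counter to the login path keeps ordinary browsing unaffected while tripping quickly on account checkers, which have to hammer that one URL to test credentials.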

If the customer base is primarily from a known country or region, geoblocking may be an option to minimize the locations an attack can originate from.

Careful review of authentication logs can identify likely proxy servers being used by the attackers. Sequences of different logins from the same IP may be an indication.
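
As a rough illustration of that kind of log review, the sketch below counts how many distinct usernames each source IP attempts to log in as; a single address trying a large number of different accounts is consistent with a proxy being used by an account checker. The log format (whitespace-separated timestamp, IP, username, status) and the threshold are assumptions made for the example.

```python
from collections import defaultdict

def suspicious_ips(log_lines, distinct_user_threshold=20):
    """Flag source IPs that attempt logins for an unusually large number of distinct accounts."""
    users_per_ip = defaultdict(set)
    for line in log_lines:
        # Assumed format: "<timestamp> <ip> <username> <status>"
        parts = line.split()
        if len(parts) < 4:
            continue
        _timestamp, ip, username, _status = parts[:4]
        users_per_ip[ip].add(username)
    return {ip: len(users) for ip, users in users_per_ip.items()
            if len(users) >= distinct_user_threshold}
```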

In the end, the best defense is smarter user behavior. Users can start by ending the habit of reusing user names and passwords; with different credentials for every site, these credential-reuse attacks won't stand a chance.

Meanwhile, Akamai's User Validation Module (UVM) will confirm that the login is coming from a browser and will defeat these tools. Organizations that are on the Akamai platform and are using Kona Site Defender can readily block these kinds of attacks by using a combination of rate controls and IP blocklists. 

Akamai also recommends that ecommerce customers configure a rate control bucket for the path to their login page.

The Render Chain and You

Many of us have used tools like Web Page Test or Y!Slow to test our sites.  These tools give us a slew of suggestions and often tell us which performance optimizations carry the most weight and which are higher priority.  But why?  Why should you make certain optimizations, why are some higher priority than others, and how does it all fit together?

Akamai will be hosting the next Web Performance Meetup in the Cambridge office, and our speakers, Matt Ringel, Enterprise Architect, and Joseph Morrissey, Senior Enterprise Architect, will be discussing these topics.  They will take a closer look at how the browser renders images on the page (the "render chain"), how it relates to a standard waterfall chart, and which front-end Web optimization rules and page-speed measurement tools matter most, and they will share real-world examples and experiences in assessing and optimizing Web content.

The meetup will be held at the Akamai office on Tuesday, August 13 at 6:30pm.  Click here to learn more about the event and our speakers and to register for the event.  We hope to see you there.

Four News Reports On Recent DDoS Activity

Since one of Akamai InfoSec's biggest tasks is to blunt the impact of DDoS attacks against customers, I'm always scanning the various tech news outlets to see what's new and who among us is being quoted. Here are four that have caught my attention in recent days -- two of which include insight from Akamai CSIRT Director Michael Smith.

DDoS Attackers Change Their Game Plans
Smith is quoted in this article about how the firepower needed to launch an effective DDoS attack is steadily increasing. As a result, Tech News World's John P. Mello Jr. writes, attackers are tweaking their tactics to get "more bang for their bytes." From the article:

Login pages at banking sites have been popular targets of application DDoS attacks. When you try to log into your bank, a whole set of backend functions are set in motion that consume CPU cycles at the site: Fraud prevention is activated; databases are accessed; authentication routines are run; and geolocations are reviewed. All those processes are performed whether a legitimate user or a fake persona is trying to log into the site. As an attacker, I would hit "that login page with a bunch of bogus usernames and passwords, knowing each request uses up a lot of resources of the target so I don't have to send as much volume of attack traffic as I would if I were trying to flood the network," Michael Smith, CSIRT director for Akamai Technologies, told TechNewsWorld. "The big trend over time will be smaller attacks with the impact of larger attacks -- smarter, more nimble, more agile attacks," he said.

DDoS: Phase 4 of Attacks Launched
Here, BankInfoSecurity reporter Tracy Kitten writes about how Izz ad-Din al-Qassam Cyber Fighters' fourth phase of DDoS attacks against U.S. banks kicked off July 31. Smith and other experts told Kitten that the attacks failed to take down the sites. From the article:

Mike Smith of the cybersecurity firm Akamai, which has been tracking and mitigating DDoS activity linked to al-Qassam, says DDoS defenses fared well throughout the morning of July 31, when the attacks began. And while the attack methods used were nothing new, some of the attack characteristics were, he says. "They keep pounding against one target," Smith said. "They've been hitting this one bank for about an hour and 15 minutes, now," which is unusual. But within a few hours, three more targets were hit, Smith says. Until now, al-Qassam typically hit a particular site for between 10 and 20 minutes at a time, Smith says. If the attacks are unsuccessful at taking a site down, the group moves on to another target, he adds.


How Do Booters Work? Inside a DDoS for Hire Attack
In this article, eWeek's Sean Michael Kerner explores the details of a talk Vigilant Chief Scientist Lance James gave at Black Hat last week. James talks about "Booter services" that offer paying customers DDoS attack capabilities on demand. From the article:

(James) got pulled into an investigation into the world of Booter services by his friend, security blogger Brian Krebs. Krebs had been the victim of a Booter service attack and was looking for some answers. "Basically a Booter is a Web-based service that does DDoS for hire at very low prices and is very hard to take down," James said. "They are marketed toward script kiddies, and many DDoS attacks that have been in the news have been done via these services." James was able to identify the suspected Booter site via Website log files and began to trace the activity of the individual who specifically attacked Krebs. Further investigation revealed that the same individual was also attacking other sites, including whitehouse.gov and the Ars Technica Website. After James was able to identify the Booter service and directly connect it to the attacks against Krebs, the two were able to help shut down the Booter service itself.

Shorter, higher-speed DDoS attacks on the rise, Arbor Networks says
Here, Network World reporter Ellen Messmer writes about how almost half of the DDoS attacks monitored in a threat system set up by Arbor Networks now reach speeds of over 1Gbps -- up 13.5 percent from last year -- while the portion of DDoS attacks over 10Gbps increased about 41 percent in the same period. From the article:

Arbor Networks monitoring system, which is based on anonymous traffic data from more than 270 service providers, saw in the second quarter of this year the more than doubling of the total number of attacks over 20Gbps that occurred in all of 2012. The only number that went down was the duration of all of these DDoS attacks, which now trend shorter, with 86% lasting less than one hour, according to the Arbor Networks trends report for the second quarter of 2013.

Note: This is the second blog post in our "Crush the Rush" holiday readiness webinar series.

 

We all know eCommerce is evolving.  It used to be pretty simple: a shopper would visit your eCommerce Web application from her laptop or PC, and you probably had to support one, maybe two browsers.  But the world has changed - quickly.  The proliferation of connected devices has changed the way we shop; whether it's couch commerce or showrooming, mobile devices have changed the game.

 

Yet it's not only mobile that has changed; the desktop/laptop environment has also evolved.  In 2008 the different versions of IE had close to 70% of the browser market share.  This is no longer the case, with Chrome, Firefox and Safari growing significantly.  And looking only at the browser families hides a lot of complexity; IE7 and IE8 are not the same browser.  To get a more complete picture of browser development, check out Evolution Of The Web.

 

Mobile is growing fast.  That is no longer news.  We have all seen Mary Meeker's projections and eBay's mobile commerce retail volume numbers.  And let's remember that mobile is not only smartphones - it includes tablets - in fact, some would argue they are the future of mobile commerce.

 

Holiday 1.png

 

The fact that we no longer go online but are online has driven eCommerce growth.  According to the IBM Digital Analytics Benchmark, 2012 US online sales for Black Friday increased ~21% over 2011, and Cyber Monday online sales grew by ~30% over 2011.  Online traffic trends over the years also show considerably bigger spikes as more consumers look online for their holiday shopping.  This also means that the cost of failures or slowdowns under peak traffic conditions just keeps getting higher.

 

Yet delivering fast, scalable Web apps that keep getting bigger and more complex, to constrained devices over constrained networks, is no simple feat.  It has gotten to the point where sites on the Gomez US Retail Website Performance Index require, on average, 30 hosts to deliver a home page.  So what happens if one of those third parties has an issue?  That depends on the architecture of the page, and often it means a significant degradation of performance from the end user's perspective.

 

Holiday 2.png

 

In this example - measured using webpagetest and Pat Meenan's great SPOF-O-Matic Chrome extension - page load time is significantly impacted by the third-party performance issue.  The kicker is that even though this isn't directly your fault, your customers will still hold you responsible for the degradation - and likely move on to the closest competitor, which is just a click away.

 

Compounding the complexity associated with Web app delivery are ever-increasing end-user experience expectations.  We have talked about this at length in other posts: if we don't meet those expectations, there are consequences.  Real User Monitoring (RUM) has made it easy to correlate performance with business metrics such as conversion, bounce or abandonment rates.  Whether it's data from vendors like Torbit or from companies like Walmart, one thing is clear - the slower your pages, the higher your abandonment and bounce rates and the lower your conversions.  In other words, Web performance impacts the business. As far back as 2006, Amazon noted that speed matters; in particular, "Every 100ms delay costs 1% of sales".

Bypassing Content Delivery Security

As is true every year at Black Hat, there are some talks that catch our attention. Talks range from well-thought-out research papers to those of the narcissistic vulnerability pimps, and this year was no exception. A talk entitled "Denying Service to DDoS Protection Services" by Allison Nixon fell squarely into the well-thought-out column. It caught our attention for the obvious reason that we provide DDoS protection as a service to our customers.

From Nixon's talk abstract:

Cloud based DDOS protection suffers from several fundamental flaws that will be demonstrated in this talk. This was originally discovered in the process of investigating malicious websites protected by Cloudflare - but the issue also affects a number of other cloud based services including other cloud based anti-DDOS and WAF providers.

You know what? Without hyperbole, Nixon is absolutely correct. There are indeed issues with these types of services, as highlighted in this article by Robert Westervelt. The flip side is that this is nothing new; the novel aspect is that it has not really been openly discussed at length before now, with a few exceptions such as the report from NCC Group. Kudos to Nixon for doing it. Among the issues discussed were origin disclosure and configuration errors, though not much thought was given to compensating controls.

The origin discovery issue allows an attacker to bypass the edge servers and access the origin systems directly. A key issue here lies with how origin systems are named: if an origin host name is easily guessable, an attacker can guess the origin system's DNS entry and simply bypass the controls, so don't use easily guessable origin host names. Attackers can leverage a host of enumeration tools and techniques, such as examining DNS for NS and MX records, guessing origin hostnames, network scanning and Shodan.
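
A defender can run the same kind of guesswork against their own zone before an attacker does. The sketch below is purely hypothetical: the candidate labels are invented for illustration, and a real check would draw on the organization's actual naming conventions and DNS history.

```python
import socket

# Hypothetical labels an attacker might try in front of a domain;
# a defender can run the same check against their own zone.
COMMON_ORIGIN_LABELS = ["origin", "origin-www", "www-origin", "backend", "direct"]

def guessable_origins(domain):
    """Return candidate origin hostnames under `domain` that actually resolve."""
    found = {}
    for label in COMMON_ORIGIN_LABELS:
        host = f"{label}.{domain}"
        try:
            found[host] = socket.gethostbyname(host)
        except socket.gaierror:
            pass  # name does not resolve; nothing disclosed
    return found

# Example: guessable_origins("example.com")
```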

Next up is the use of Pragma debug headers on pages served by a content distribution network vendor. These headers are added by the provider to offer a level of debugging where required, but they can also be used by an attacker to design a DDoS attack, and some providers may even put origin system names in them. The upside for Akamai customers is that these headers are not necessary for service operation and can be disabled if required.
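
For illustration, here is how such a probe might look from the client side using Python's requests library. The specific Pragma values below are commonly cited Akamai debug directives, but treat them as assumptions in this sketch; the point is simply that if debug headers come back in the response, they may reveal cache keys or origin details.

```python
import requests

# Illustrative probe for CDN debug headers; the Pragma values are assumptions.
DEBUG_PRAGMA = "akamai-x-cache-on, akamai-x-get-cache-key, akamai-x-check-cacheable"

def probe_debug_headers(url):
    """Request a page with debug Pragma directives and report any X-Cache* response headers."""
    resp = requests.get(url, headers={"Pragma": DEBUG_PRAGMA}, timeout=10)
    return {k: v for k, v in resp.headers.items() if k.lower().startswith("x-cache")}
```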

What can be done for Akamai customers?

First off, non-standard origin names should be used, along with properly configured access control lists. These ACLs should include only Akamai system addresses so that non-authorized addresses can't query origin systems directly. Watch verbose error pages as well: they can disclose far more information than a customer may intend, including details that inadvertently identify origin systems.
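
As a sketch of the ACL idea, origin-side admission logic might look something like the following. The networks listed are placeholder documentation ranges, not Akamai addresses; in practice the allowlist would be populated from the CDN provider's published server ranges.

```python
import ipaddress

# Placeholder allowlist (RFC 5737 documentation ranges), standing in for the
# CDN provider's published edge server networks.
ALLOWED_NETWORKS = [ipaddress.ip_network(cidr)
                    for cidr in ("192.0.2.0/24", "198.51.100.0/24")]

def allowed_to_reach_origin(client_ip):
    """Origin-side check: only requests from allowlisted networks get through."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

# Example: allowed_to_reach_origin("203.0.113.9") -> False (denied at the origin)
```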

How can Akamai help? We provide a service offering called SiteShield.

"SiteShield protects the origin by effectively removing it from the Internet-accessible IP address space, adding an additional layer of security protection while still ensuring that content is delivered quickly and without fail, regardless of end user location."

We can detect when an origin system is in trouble and then pull from a different origin hostname or even Cloud Storage. We can segment sites by ensuring that only the Akamai edge servers can query the origin systems. We can block access to origin systems. We've known about these issues since before 2002 and at that time we applied for and received a patent on the concept of website security.

By sheer virtue of the size and scope of Akamai's platform, we can mitigate most threats to our customers at the edge.

It's a popular bit of Rock & Roll lore: The band Van Halen conducted a test to make sure its tour contracts were being read, placing a line in them saying there were to be no brown M&Ms backstage. Not surprisingly, they found a couple of brown ones and trashed their dressing room in response. 

The real story is a lot less dramatic. It wasn't about the band playing games with people. It was about making sure EVERYTHING in those contracts was being read. Frontman David Lee Roth describes it this way in his autobiography, "Crazy from the Heat":

Van Halen was the first band to take huge productions into tertiary, third-level markets. We'd pull up with nine eighteen-wheeler trucks, full of gear, where the standard was three trucks, max. And there were many, many technical errors -- whether it was the girders couldn't support the weight, or the flooring would sink in, or the doors weren't big enough to move the gear through. The contract rider read like a version of the Chinese Yellow Pages because there was so much equipment, and so many human beings to make it function. So just as a little test, in the technical aspect of the rider, it would say "Article 148: There will be fifteen amperage voltage sockets at twenty-foot spaces, evenly, providing nineteen amperes . . ." This kind of thing. And article number 126, in the middle of nowhere, was: "There will be no brown M&M's in the backstage area, upon pain of forfeiture of the show, with full compensation."

So, when I would walk backstage, if I saw a brown M&M in that bowl . . . well, line-check the entire production. Guaranteed you're going to arrive at a technical error. They didn't read the contract. Guaranteed you'd run into a problem. Sometimes it would threaten to just destroy the whole show. Something like, literally, life-threatening.

In a video Q&A, Roth noted that the night he found the brown M&Ms, the stage had sunk into the venue's rubber floor because promoters failed to read the rider's specifications on stage weight distribution. He DID trash a dressing room with a food fight and flying feathers from a torn couch cushion, and the damage was about $200. The stage damage, meanwhile, amounted to nearly half a million dollars and could have injured or killed someone.
 

For those in security, there's a valuable lesson here. Large enterprises are constantly circulating thick stacks of to-do and not-to-do lists, directions on how to proceed, and so on. The smartest and most dedicated people are still human, prone to skimming a line here or a page there. But doing so can compromise an organization's physical and online security.

Akamai's InfoSec department has its own little Brown M&M tests, which we use to keep ourselves in check and ensure we don't let serious mistakes happen.
My favorite example:
One of the security procedures mandates that employees lock their laptops any time they walk away from the desk. It's an easy rule to forget, especially if you have to run to the bathroom, or you spot someone in the office you've been looking for and rush over to catch a moment of their time. If we get caught forgetting that rule and leave a machine unlocked with the screen open for passersby to read, we have to buy a round of coffee for everyone. 

Akamai InfoSec Senior Program Manager Dan Abraham tells the story: "I got caught on my second day on the job.  My boss found my machine unlocked and sent me the 'coffee' message. I was mortified, but she gave me the best wake-up call to how seriously we take this rule.  I set up two shortcuts to quickly set the machine in locked mode."

When we get caught forgetting about our own rules and get penalized, you can bet we're a lot less likely to forget the next time.
It's all in good fun. No one's room gets trashed, and I get free coffee -- unless I'm the guy who gets caught with an unlocked screen.

A common set of security control objectives found in standard frameworks (ISO 27002, FedRAMP, et al.) focuses on environmental controls.  These controls, which might cover humidity sensors and fire suppression, are designed to maximize the mean time between critical failures (MTBCF) of the systems inside a data center.  They are often about reliability, not safety[1], fixating on over-engineering a small set of systems rather than building in fault tolerance.

Is the cost worth the hassle? If you run one data center, then the costs might be worthwhile - after all, it's only a few capital systems, and a few basis points of improvement in MTBCF will likely be worth the hassle (both in operational false positives and in deployment cost). But what if you operate in thousands of data centers, most of them someone else's?  The cost multiplies significantly, but the marginal benefit shrinks, as any given data center improvement affects only a small portion of your systems.  Each data center in a planetary-scale environment is now about as critical to availability as a power strip is to a single data center location.  Mustering an argument to monitor every power strip would be challenging; a better approach is to have a drawer full of power strips and replace the ones that fail.

The same model applies at the planetary scale: with thousands of data centers all over the world (in most of which the operators already have other incentives to take care of environmental monitoring), a much more effective approach is to continue to focus on regional failover (data centers, metro regions, and countries go offline all the time), and only worry about issues within a data center when they become a noticeable problem. 
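
As a toy model of that approach, assume each data center reports an observed failure rate and gets pulled from rotation only when it becomes a noticeable problem, with regional failover absorbing its traffic. The threshold and names are illustrative assumptions, not a description of Akamai's mapping system.

```python
# Toy model: drain a data center once its observed failure rate crosses a threshold,
# rather than instrumenting every facility's environmental sensors.
FAILURE_THRESHOLD = 0.05  # illustrative: 5% failed health checks

def serving_datacenters(observed_failure_rates):
    """observed_failure_rates: dict of data center name -> failure rate (0.0 to 1.0).
    Returns the names that should stay in rotation; the rest fail over regionally."""
    return {name for name, rate in observed_failure_rates.items()
            if rate < FAILURE_THRESHOLD}

# Example: serving_datacenters({"bos-1": 0.001, "fra-2": 0.12, "nrt-3": 0.0})
# -> {"bos-1", "nrt-3"}   (fra-2 is drained; its traffic shifts to nearby regions)
```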

[1] Leveson, Nancy. Section 2.1, "Confusing Safety with Reliability", Engineering a Safer World, pp 7-14

 

crossposted at www.csoandy.com

Black Hat 2013: A Point-Counterpoint

An old friend and seasoned veteran of the security industry, Alan Shimel, was quick to pounce on my statement yesterday that there is nothing new happening in security; that we're simply trying to find more effective ways to deal with the same old problems.

Alan does make some valid points, especially the argument that there has been advancement on the technology side of things. I was speaking more to the messaging you see among vendors who often come to these shows preaching what they see as new trends that in reality are old challenges. I was also making the case that this stuff doesn't have to be new to be important.

Because I love a good point-counterpoint, I'm now sending you to Alan's post. Read and judge for yourselves:

To think otherwise is to get lost in the trees and miss the forest.