
Recently by Bill Brenner

HacKid Conference: Security Training for Kids

As I've written before, we in Akamai InfoSec take our security training very seriously. We also know that our success as a security operation depends on the skills and talents of the future. So when I see great examples of training for younger generations, I'm compelled to mention it here. For this post, the subject is the HacKid Conference scheduled for April 19 and 20 at the San Jose Tech Museum of Innovation.

Like Skipfish, Vega is Used to Target Financial Sites

Yesterday, we told you about how attackers were abusing the Skipfish Web application vulnerability scanner to target financial sites. Since then, Akamai's CSIRT team has discovered that another scanner, Vega, is being abused in the same manner.

Skipfish and Vega are automated web application vulnerability scanners available as free downloads. Skipfish is hosted at Google's code website, and Vega is available from Subgraph. Both are intended for security professionals evaluating the security profile of their own web sites. Skipfish was built and is maintained by independent developers, not Google, although Google's information security engineering team is mentioned in the project's acknowledgements. Vega is a Java application that runs on Linux, OS X and Windows. The most recent release of Skipfish was in December 2012; Vega's was in August 2013.


Attackers Use Skipfish to Target Financial Sites

Akamai's CSIRT team has discovered a series of attacks against the financial services industry. In this instance, the bad guys are abusing the Skipfish Web application vulnerability scanner to probe company defenses.

Skipfish is available for free download at Google's code website. Security practitioners use it to scan their own sites for vulnerabilities. The tool was built and is maintained by independent developers and not Google, though Google's information security engineering team is mentioned in the project's acknowledgements.

In recent weeks, our CSIRT researchers have watched attackers using Skipfish for sinister purposes. CSIRT's Patrick Laverty explains it this way in an advisory available to customers through their services contacts:

Specifically, we have seen an increase in the number of attempts at Remote File Inclusion (RFI). An RFI vulnerability is created when a site accepts a URL from another domain and loads its contents within the site. This can happen when a site owner wants content from one site to be displayed in their own site, but doesn't validate which URL is allowed to load. If a malicious URL can be loaded into a site, an attacker can trick a user into believing they are using a valid and trusted site. The site visitor may then inadvertently give sensitive and personal information to the attacker. For more information on RFI, please see the Web Application Security Consortium and OWASP websites.
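To make the pattern concrete, here is a minimal, hypothetical sketch of the flaw Laverty describes. RFI is most commonly seen in PHP include() calls, but the same mistake can be made in any language; the handler, host names and allow-list below are illustrative assumptions, not code from any real site.

```python
# Hypothetical sketch of an RFI-prone "include a remote page" handler.
# The vulnerable version loads whatever URL the client names; the safer
# version only loads content from hosts the site owner explicitly trusts.
import urllib.request
from urllib.parse import urlparse

ALLOWED_HOSTS = {"partners.example.com"}  # assumed allow-list

def render_widget(widget_url: str) -> str:
    # Vulnerable pattern (do NOT do this): no check on which URL may be loaded.
    #   return urllib.request.urlopen(widget_url).read().decode()

    # Safer pattern: validate the host before fetching and embedding the content.
    host = urlparse(widget_url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"refusing to include content from {host!r}")
    return urllib.request.urlopen(widget_url).read().decode()
```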

Akamai has seen Skipfish probes primarily targeting the financial industry. Requests appear to be coming from multiple, seemingly unrelated IP addresses. All of these IP addresses appear to be open proxies, used to mask the attacker's true IP address. 

Skipfish will test for an RFI injection point by sending the string www.google.com/humans.txt or www.google.com/humans.txt%00 to the site's pages. It is a normal practice for sites to contain a humans.txt file, telling visitors about the people who created the site.

If an RFI attempt is successful, the content of the included page (in this instance, Google's humans.txt file) will be displayed in the targeted website. The included string and the user-agent are both configurable by the attacker running Skipfish.

While the default user-agent for Skipfish version 2.10b is "Mozilla/5.0 SF/2.10b", we cannot depend on that value being set. It is easily editable to any value the Skipfish operator chooses.

Companies can see if they're vulnerable by using Kona Site Defender's Security Monitor to sort the stats by ARL and looking for the aforementioned humans.txt string included in the ARL. Additionally, log entries will show the included string in the URL.
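As a rough illustration of what that log review might look like, here is a small, hypothetical Python sketch that flags access-log lines containing the probe string or the default Skipfish user-agent. The log path and format are assumptions, and as noted above the user-agent is trivially changed, so treat it as a weak signal.

```python
# Hypothetical sketch: flag access-log lines that look like Skipfish RFI probes.
import re

PROBE = re.compile(r"google\.com/humans\.txt(%00)?", re.IGNORECASE)
SKIPFISH_UA = re.compile(r"SF/2\.10b")  # default user-agent only; easily changed

def suspicious_lines(log_path="access.log"):
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if PROBE.search(line) or SKIPFISH_UA.search(line):
                yield line.rstrip()

if __name__ == "__main__":
    for hit in suspicious_lines():
        print(hit)
```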

"We have seen three behaviors by Skipfish that can trigger WAF rule alerts," Laverty wrote. "The documentation for Skipfish claims it can submit up to 2,000 requests per second to a site."

Laverty said companies can blunt the threat by adjusting Summary and Burst rate control settings to detect this level of traffic and deny further requests. Also, a WAF rule can be created that would be triggered if the request were to contain the string "google.com/humans.txt". 

There is no situation (other than on google.com) where this would be a valid request for a site, he said. 
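The rate controls Laverty mentions are a Kona Site Defender feature; purely to illustrate the underlying idea, here is a hypothetical sliding-window sketch that denies a client exceeding a per-second request budget far below the roughly 2,000 requests per second Skipfish can generate. The threshold and window are assumptions to tune per site, and this is not Akamai's implementation.

```python
# Hypothetical sketch of a per-client burst limit, not Akamai's rate controls.
import time
from collections import defaultdict, deque

BURST_LIMIT = 50        # assumed requests-per-window threshold; tune per site
WINDOW_SECONDS = 1.0

_recent = defaultdict(deque)  # client IP -> timestamps of recent requests

def allow_request(client_ip, now=None):
    now = time.monotonic() if now is None else now
    q = _recent[client_ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()             # drop requests outside the window
    if len(q) >= BURST_LIMIT:
        return False            # deny: client is over budget
    q.append(now)
    return True
```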


Why I'm Attending ShmooCon 2014

Here at Akamai, we're busy preparing for RSA Conference 2014. It's the biggest security conference of the year, and we send a platoon of employees every time. Given our role in securing the Internet, it's a no-brainer.

But there are many other conferences we attend each year, because:

  1. We have a lot of information to share about attacks against Akamai customers and how the security team continues to successfully defend against them.
  2. We have to stay on top of all the latest threats and attack techniques so we can continue to be successful. Conferences are an important place to do that.
Next week, I'm attending one of the lesser-known conferences: ShmooCon 2014 in Washington DC. In recent years, I've found some of the best content at this event, and I've learned a lot. It's also an excellent place to meet other security practitioners who can become important allies. Some of the most important contacts I've made were at ShmooCon.

The unfamiliar usually chuckle or cock their heads in puzzlement when I tell them about ShmooCon. The name throws them off, and it's not a traditional business conference. ShmooCon is organized by the Shmoo Group, a security think tank started by Bruce Potter in the late 1990s. Attendees represent the full cross section of the security industry. There are hackers, CSOs, government security types and everything in between. More than a few people have compared it to the Black Hat conferences of old or a smaller version of Defcon.

The event has inspired a lot of thinking outside the box -- not just in terms of the talks, but in how attendees travel and network. In recent years, people have carpooled to ShmooCon. For three years in a row I traveled to and from the event in what we called the Shmoobus -- an RV crammed with hackers making the journey from Boston to Washington DC. Those 12-hour drives made for a lot of bonding. With such a long trek, there's time to delve into deep discussions about the challenges of our jobs.

The Shmoobus is no more, unfortunately. But what I learned about security on those journeys will last a lifetime.

For more information about ShmooCon, visit the website. The full agenda is posted, including one of my favorite parts of the event, Friday-night "fire talks" -- 15-minute presentations where speakers are challenged to dive right into the core of their content.

I'll write about the talks and other ShmooCon events from this blog.


Security Predictions? Here Are Some Facts About 2014

I've said it before and will repeat it here: I absolutely loathe security predictions.

I have nothing against those who make them. It's just that most predictions are so much duh. The rest are marketing creations with no attachment to reality.

Examples of the self evident:

  • Mobile malware is gonna be a big deal.
  • Social networking will continue to be riddled with security holes and phishing attacks.
  • Microsoft will release a lot of security patches.
  • Data security breaches will continue to get more expensive.

Examples of predictions that never had a hope of becoming true:

I'm going to offer you something different: Some facts for 2014. That's right, things that are really going to happen -- things that are not obvious to those outside of Akamai. Let's begin:

  1. In February, we will officially launch the first-ever Akamai.com security section, and it'll be packed with everything you need to understand the threats your organization faces and how Akamai keeps its own security shop in order.
  2. Several of us from Akamai InfoSec will travel the globe, visiting customers and speaking at many a security conference. Those who attend will walk away enlightened and inspired.
  3. Akamai will continue to protect customers from DDoS and other attacks.
  4. You will see many new security videos and hear many new podcasts from us.
  5. If you visit the soon-to-be-launched Akamai security section, you will walk away with a better understanding of our compliance efforts than ever before.
Happy New Year! May you have a healthy, prosperous and secure 2014.


Akamai Security Compliance: The Story So Far

Continuing our weekly series of security anthologies, we focus today on Akamai compliance procedures. We're in the midst of an ongoing series on how Akamai approaches compliance, and the following content presents the story thus far.

Four Things to Ask Before Seeking FedRAMP Certification
For a look at how we reached FedRAMP certification, I spoke with Akamai InfoSec's Kathryn Kun, the program manager who played a critical role in getting us certified.

Making Compliance Docs Public
To give customers better tools for self service, we're working to make compliance documentation public.

How Akamai InfoSec Answers Customer Compliance Questions
The process to address customer security and compliance questions used to be somewhat chaotic. Questions would float around in random emails and elsewhere, and which ones got answered was a luck of the draw. We found this unacceptable, and did something about it.

Everything You Want To Know About Akamai Security & Compliance
About our series on Akamai InfoSec compliance efforts.

Video: Security and Compliance 101
Chief Security Officer Andy Ellis gives a brief overview of security and compliance and what they mean to Akamai. Andy's overview includes common terms along with definitions and an overview of common standards and their components.

Akamai FedRAMP Compliance is Huge for Security
Why achieving Federal Risk and Authorization Management Program (FedRAMP) compliance as a cloud services provider was a major move for us.

Experiencing Compliance From The Inside Out
Bill Brenner's early lesson in how Akamai does compliance.

Lessons From Akamai InfoSec Training
How our compliance efforts shape the training of new employees.



Security at Planetary Scale: An Anthology

We continue this week's series of anthologies with a collection of posts about security at planetary scale.


Environmental Controls at Planetary Scale

Each data center in a planetary scale environment is now as critical to availability as a power strip is to a single data center location.  Mustering an argument to monitor every power strip would be challenging; a better approach is to have a drawer full of power strips, and replace ones that fail.


2003 Blackout: An Early Lesson in Planetary Scale?

What the 2003 blackout taught us about security needs at planetary scale.


The Power Of Redundancy

How Akamai keeps Internet traffic secure with redundancy across servers, server racks, data centers, cities, countries, and even continents.


Mapping Networks and Data: Safety in Numbers

This post focuses on another way we keep Internet traffic flowing smoothly in the face of attempted attacks: network and data mapping.


Ten Years After the Blaster Worm

A look at how the world -- and our approach to security -- has changed in the decade since Blaster.



Attack Techniques and Defenses: An Anthology

Akamai's security team defends customers from a variety of threats 24 hours a day, seven days a week. You name it: DDoS attacks, DNS-related attacks, vulnerability exploitation -- we've seen it all.

What follows is a collection of posts focusing on attack techniques and the defenses we have deployed and/or suggested.

Indonesian Attack Traffic Tops List; Port 445 No Longer Main Target

Indonesia replaces China as the top producer of attack traffic.

Dissecting Operation Ababil at Akamai Edge

Operation Ababil has been a thorn in the side of financial institutions this past year, costing victims both business and sleep. At Akamai Edge, we talked a lot about the attacks -- particularly the lessons we've learned and the fresh security measures companies have put in place.

Manipulating PHP Superglobal Variables

How attackers are able to use vulnerabilities in PHP applications to exploit superglobals -- pre-defined variables in PHP -- to launch malicious code.

Bots, Crawlers Not Created Equally

How to squeeze the maximum usefulness out of bots and other Web crawlers.

Was This Really One of the Internet's Biggest Attacks?

A story in eWeek about "one of the largest attacks in the history of the Internet" describes a 9-hour barrage against an unnamed entity that swelled to 100 Gbps of traffic at its peak. But does it really qualify as one of the biggest in Internet history?

Defending Against Watering-Hole Attacks

A look at "watering-hole" attacks and what Akamai's CSIRT team has learned in tracking them.

SEA Attacks Illustrate Need for Better DNS Security

The Syrian Electronic Army (SEA) -- a pro-Assad hacking group -- is making misery for some of the biggest entities on the Internet.

Mapping Networks and Data: Safety in Numbers

This post focuses on another way we keep Internet traffic flowing smoothly in the face of attempted attacks: network and data mapping.

DDoS Attacks Used As Cover For Other Crimes

Protecting customers from DDoS attacks is an Akamai InfoSec specialty. When we see DDoS attempts against our customers, the typical thinking is that someone is doing it to force sites into downtime, which can cost a business millions in lost online sales. But sometimes, these attacks are simply a cover operation to distract the victim while something else is going on. 

Blunting Attacks During Olympic-sized Events

InfoSec receives many questions from Akamai customers on a daily basis. A few months ago, someone asked if we had a case study on attack vectors against the 2012 London Olympics. The customer had a big event coming up and wanted a picture of what they were up against -- and how they could defend against it all to keep their sites running smoothly. As it turned out, we did.

As is true every year at Black Hat, some talks catch our attention. They range from well-thought-out research papers to the work of narcissistic vulnerability pimps. This year was no exception. A talk entitled "Denying Service to DDoS Protection Services" by Allison Nixon fell into the well-thought-out column. It caught our attention for the obvious reason that we provide this as a service to our customers.



Akamai CSIRT Warns of DNS Record Hijacking

In recent weeks, Akamai's CSIRT team has seen the Web sites of multiple businesses redirected after being hijacked by a malicious user.

CSIRT's Patrick Laverty, who authored the advisory, said the intent of these hacks can include redirecting and capturing all company email on a rogue server, or simply embarrassing the affected company.

The problem is that the malicious user is able to gain administrative control of the account that allows changes to be made to the company's DNS records. Some of these companies believe the account access was obtained through a phishing attack against a person in the company who had the credentials to make changes. In other situations, the attack was against the domain registrars themselves.

"Companies can protect themselves from this type of attack by locking their domain with the registrar," Laverty wrote. "There are two levels of locks that can and should be enabled. There is a lock between the owner of the site and the domain registrar and there is a lock between the registrar and ICANN. To be truly safe, both levels of locks should be put in place."

Are you affected? 

You know you are affected if your domain no longer shows the Web site you expect. If all pages under a domain either return the attacker's pages or 404 messages, you may be affected by this type of attack. 

The most certain way to determine if you're affected is to check your domain's DNS records. This can be done with a simple "whois" lookup or by logging in at the registrar and checking the values. Be aware that the attacker may also have changed the password on the registrar account.
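For a quick, repeatable version of that check, the sketch below compares a domain's live NS records against the nameservers you expect to see. It assumes the third-party dnspython package, and the domain and expected nameservers are placeholders.

```python
# Hypothetical sketch: warn if a domain's NS records stray from the expected set.
import dns.resolver  # third-party package: pip install dnspython

EXPECTED_NS = {"ns1.example-dns.net.", "ns2.example-dns.net."}  # assumed values

def nameservers_look_right(domain):
    live = {rr.target.to_text().lower() for rr in dns.resolver.resolve(domain, "NS")}
    unexpected = live - {ns.lower() for ns in EXPECTED_NS}
    if unexpected:
        print(f"WARNING: unexpected nameservers for {domain}: {sorted(unexpected)}")
    return not unexpected

if __name__ == "__main__":
    nameservers_look_right("example.com")
```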

Suggested fixes

Laverty outlined a two-part solution.

The first is to properly educate the people who hold the credentials that can update DNS records with the registrar. Many times in these attacks, the username and password were successfully phished from someone with that level of access. If the credentials can be phished away, the second part of the protection won't help.

The second part is to have domain locks in place. Domains can have locks at both the registry and registrar levels. The site owner can set and control registrar locks. These will prevent any other registrar from being able to successfully request a change to DNS for a domain. The locks that can be set at the registrar level by the site owner are:

• clientDeleteProhibited
• clientUpdateProhibited
• clientTransferProhibited

The clientDeleteProhibited lock prevents a registrar from deleting the domain records without the owner first unlocking the domain. With clientUpdateProhibited set, the registrar may not make updates to the domain, and with clientTransferProhibited set, the registrar may not allow the domain to be transferred to another registrar. The only exception is when the domain registration period has expired. These locks can be set and unset by the site owner, and many registrars allow them at no cost.

A second level of locks can also be put in place at the registry level. These are controlled by the registry, and setting them can incur a cost to the domain owner. These locks are:

• serverDeleteProhibited
• serverUpdateProhibited
• serverTransferProhibited
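One way to audit whether these locks are actually set is to look for the status codes in whois output. The sketch below is a rough, hypothetical check: it shells out to the system whois command, which must be installed, and whois output formats vary by registrar, so treat any result as a prompt to verify with your registrar rather than a definitive answer.

```python
# Hypothetical sketch: report which lock status codes are absent from whois output.
import subprocess

LOCKS = (
    "clientDeleteProhibited", "clientUpdateProhibited", "clientTransferProhibited",
    "serverDeleteProhibited", "serverUpdateProhibited", "serverTransferProhibited",
)

def missing_locks(domain):
    out = subprocess.run(["whois", domain], capture_output=True, text=True).stdout.lower()
    return [lock for lock in LOCKS if lock.lower() not in out]

if __name__ == "__main__":
    for lock in missing_locks("example.com"):
        print(f"{lock} does not appear to be set")
```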

How Origin Offload Improves Patch Management

I frequently write about patches and updates, believing it's important for customers and the wider business world to keep their machines as up to date as possible. But until now, I've never written about the direct role Akamai plays in smoothing the patch management process.

This is a post about origin offload and how it keeps the patch downloading sites of various companies from getting crushed beneath the weight of heavy demand when the fix arrives.

First, an observation: The normal traffic pattern for a patch site is very small during most days of the month. But there's a massive spike of activity when a patch or update is first released.  Everybody tries to download patches at the same time. For a software vendor without Akamai, this means that in order to support a worldwide patch rollout, they need massive amounts of web server infrastructure. That's impractical to say the least, since most of that infrastructure wouldn't be used most of the time.

To better explain our role, I went to Akamai CSIRT Director Michael Smith, who started with a banking analogy. He noted that in the days before direct deposit and ATMs, your average bank would be snarled by car and foot traffic when people went to withdraw cash on payday. Direct deposit and ATMs all but eliminated that phenomenon by spreading around the resources by which people could get their money.

Direct deposit and ATMs, he said, are forms of origin offload. The bank is the origin, and by offloading that traffic among resources distributed around the world and across the Internet, traffic jams are mostly eliminated.

In the case of patch management, the software vendor's web server is the origin. Instead of a bank dispensing cash, the given company dispenses patches. 

"We sit in between a website's users and our customers' web servers. When the user makes a request for the patch, they send those requests first to our servers." he says. "When a person requests a patch, they're going to us. Instead of everyone jamming the main supplier's site for patches, Akamai helps distribute the load for them. We deliver content from the edge, where our servers are deployed inside the user's ISP, which means fewer requests directly to the patch provider's site."

Though we typically think of origin offload -- and for OS patches and anti-virus updates we will see up to 99 percent origin offload -- as a tool for our customers to save on bandwidth, server licenses and hardware, there's also a security component.  

The less traffic that goes directly to an origin, the less there is to monitor. There's less traffic to inspect with IDS, fewer firewall and application logs to sift through and less data being held in a SIEM. 

More importantly, we only send requests to the origin that are for dynamically-generated pages specific to the user -- exactly the kind of traffic that is security-relevant and that you want to inspect.  

Not only do you save money on infrastructure at the origin, but it also greatly increases the signal-to-noise ratio of any kind of security monitoring that you are doing.
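For readers who like numbers, the offload figure mentioned above reduces to simple arithmetic: the share of requests (or bytes) served from the edge without touching the origin. A minimal sketch, with made-up counts:

```python
# Hypothetical sketch: origin offload as the share of traffic served from the edge.
def origin_offload(edge_hits, origin_hits):
    total = edge_hits + origin_hits
    return 0.0 if total == 0 else edge_hits / total

# Made-up example: 990,000 requests answered from edge caches and 10,000 sent to
# origin works out to 99% offload, the ballpark cited above for patch traffic.
print(f"{origin_offload(990_000, 10_000):.0%}")
```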


