As I've written before, we in Akamai InfoSec take our security training very seriously. We also know that our success as a security operation depends on the skills and talents of the future. So when I see a great example of training for younger generations, I'm compelled to mention it here. For this post, the subject is the HacKid Conference scheduled for April 19 and 20 at the San Jose Tech Museum of Innovation.
Yesterday, we told you about how attackers were abusing the Skipfish Web application vulnerability scanner to target financial sites. Since then, Akamai's CSIRT team has discovered that another scanner, Vega, is being abused in the same manner.
Skipfish and Vega are automated Web application vulnerability scanners available as free downloads: Skipfish from Google's code website, and Vega from Subgraph. Both are intended for security professionals to evaluate the security profile of their own Web sites. Skipfish was built and is maintained by independent developers, not Google, though the code is hosted on Google's downloads site and Google's information security engineering team is mentioned in the project's acknowledgements. Vega is a Java application that runs on Linux, OS X and Windows. The most recent release of Skipfish was in December 2012; Vega's was in August 2013.
Akamai's CSIRT team has discovered a series of attacks against the financial services industry. In this instance, the bad guys are abusing the Skipfish Web application vulnerability scanner to probe company defenses.
Skipfish is available for free download at Google's code website. Security practitioners use it to scan their own sites for vulnerabilities. The tool was built and is maintained by independent developers and not Google, though Google's information security engineering team is mentioned in the project's acknowledgements.
In recent weeks, our CSIRT researchers have watched attackers using Skipfish for sinister purposes. CSIRT's Patrick Laverty explains it this way in an advisory available to customers through their services contacts:
Specifically, we have seen an increase in the number of attempts at Remote File Inclusion (RFI). An RFI vulnerability is created when a site accepts a URL from another domain and loads its contents within the site. This can happen when a site owner wants content from one site to be displayed in their own site, but doesn't validate which URL is allowed to load. If a malicious URL can be loaded into a site, an attacker can trick a user into believing they are using a valid and trusted site. The site visitor may then inadvertently give sensitive and personal information to the attacker. For more information on RFI, please see the Web Application Security Consortium and OWASP websites.
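To make the pattern concrete, here is a minimal sketch (not from the advisory; the function names and allowlist are hypothetical) of how an include-by-URL feature becomes an RFI hole, and how validating the source host closes it:

```python
# Minimal sketch of an RFI-prone "include by URL" feature and its fix.
# The function names and allowlist are hypothetical, not from the advisory.
from urllib.parse import urlparse
from urllib.request import urlopen

ALLOWED_HOSTS = {"partners.example.com"}  # hosts we intend to embed

def render_include_unsafe(url):
    # VULNERABLE: fetches and embeds whatever URL the client supplied,
    # letting an attacker display their own content inside the site.
    return urlopen(url).read()

def render_include_safe(url):
    # Validate the URL's host against an explicit allowlist before fetching.
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise ValueError("refusing to include content from %r" % host)
    return urlopen(url).read()
```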
Akamai has seen Skipfish probes primarily targeting the financial industry. Requests appear to be coming from multiple, seemingly unrelated IP addresses. All of these IP addresses appear to be open proxies, used to mask the attacker's true IP address.
Skipfish will test for an RFI injection point by sending the string www.google.com/humans.txt or www.google.com/humans.txt%00 to the site's pages. It is a normal practice for sites to contain a humans.txt file, telling visitors about the people who created the site.
If an RFI attempt is successful, the content of the included page (in this instance, the quoted Google text above) will be displayed in the targeted website. The included string and the user-agent are both configurable by the attacker running Skipfish.
While the default user-agent for Skipfish version 2.10b is "Mozilla/5.0 SF/2.10b", we cannot depend on that value being set. It is easily editable to any value the Skipfish operator chooses.
Companies can see if they're being targeted by using Kona Site Defender's Security Monitor: sort the stats by ARL and look for the aforementioned humans.txt string included in the ARL for the site. Additionally, log entries will show the included string in the URL.
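Outside of Security Monitor, the same indicators can be pulled from ordinary access logs. A rough sketch (the log path and format are assumptions about your environment) that flags both the humans.txt marker and the default Skipfish user-agent:

```python
# Rough sketch: flag access-log lines carrying either the humans.txt
# RFI probe string or the default Skipfish user-agent. The log path
# and format are assumptions about your environment.
import re

PROBE = re.compile(r"google\.com/humans\.txt(%00)?", re.IGNORECASE)
SKIPFISH_UA = re.compile(r"Mozilla/5\.0 SF/")

def suspicious_lines(log_path):
    with open(log_path, errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            if PROBE.search(line) or SKIPFISH_UA.search(line):
                yield lineno, line.rstrip()

for lineno, line in suspicious_lines("/var/log/apache2/access.log"):
    print("%d: %s" % (lineno, line))
```

Keep in mind the caveat above: the user-agent match only catches operators who leave the default in place.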
"We have seen three behaviors by Skipfish that can trigger WAF rule alerts," Laverty wrote. "The documentation for Skipfish claims it can submit up to 2,000 requests per second to a site."
Laverty said companies can blunt the threat by adjusting Summary and Burst rate control settings to detect this level of traffic and deny further requests. Also, a WAF rule can be created that would be triggered if the request were to contain the string "google.com/humans.txt".
There is no situation (other than on google.com) where this would be a valid request for a site, he said.
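The combined logic is easy to reason about independent of any product syntax. Here is a rough sketch (thresholds and names are assumptions, not Kona Site Defender configuration) of a burst-rate control paired with the humans.txt string match:

```python
# Rough sketch of the two mitigations Laverty describes: a burst-rate
# control and a string-match rule. Thresholds and names are assumptions,
# not Kona Site Defender configuration.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0
MAX_REQUESTS_PER_WINDOW = 100          # assumed burst threshold
BLOCKED_SUBSTRING = "google.com/humans.txt"

_recent = defaultdict(deque)           # client IP -> recent request times

def should_deny(client_ip, request_uri):
    # Rule 1: the string is never valid outside google.com, so deny on sight.
    if BLOCKED_SUBSTRING in request_uri:
        return True
    # Rule 2: sliding-window burst control. Record this request, drop
    # timestamps older than the window, then check the budget.
    now = time.monotonic()
    window = _recent[client_ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS_PER_WINDOW
```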
- We have a lot of information to share about attacks against Akamai customers and how the security team continues to successfully defend against them.
- We have to stay on top of all the latest threats and attack techniques so we can continue to be successful. Conferences are an important place to do that.
- Mobile malware is gonna be a big deal.
- Social networking will continue to be riddled with security holes and phishing attacks.
- Microsoft will release a lot of security patches.
- Data security breaches will continue to get more expensive.
Examples of predictions that never had a hope of coming true:
- Pen Testing will die
- IDS/IPS will die
- In February, we will officially launch the first-ever Akamai.com security section, and it'll be packed with everything you need to understand the threats your organization faces and how Akamai keeps its own security shop in order.
- Several of us from Akamai InfoSec will travel the globe, visiting customers and speaking at many a security conference. Those who attend will walk away enlightened and inspired.
- Akamai will continue to protect customers from DDoS and other attacks.
- You will see many new security videos and hear many new podcasts from us.
- If you visit the soon-to-be-launched Akamai security section, you will walk away with a better understanding of our compliance efforts than ever before.
Continuing our weekly series of security anthologies, we focus today on Akamai compliance procedures. We're in the midst of an ongoing series on how Akamai approaches compliance, but the following content presents the story thus far.
Four Things to Ask Before Seeking FedRAMP Certification
For a look at how we reached FedRAMP certification, I spoke with Akamai InfoSec's Kathryn Kun, the program manager who played a critical role in getting us certified.
Making Compliance Docs Public
To give customers better tools for self service, we're working to make compliance documentation public.
How Akamai InfoSec Answers Customer Compliance Questions
The process for addressing customer security and compliance questions used to be somewhat chaotic. Questions would float around in random emails and elsewhere, and which ones got answered was the luck of the draw. We found this unacceptable, and did something about it.
Everything You Want To Know About Akamai Security & Compliance
About our series on Akamai InfoSec compliance efforts.
Video: Security and Compliance 101
Chief Security Officer Andy Ellis gives a brief overview of security and compliance and what they mean to Akamai. Andy's overview includes common terms along with definitions and an overview of common standards and their components.
Akamai FedRAMP Compliance is Huge for Security
Why achieving Federal Risk and Authorization Management Program (FedRAMP) compliance as a cloud services provider was a major move for us.
Experiencing Compliance From The Inside Out
Bill Brenner's early lesson in how Akamai does compliance.
Lessons From Akamai InfoSec Training
How our compliance efforts shape the training of new employees.
We continue this week's series of anthologies with a collection of posts about security at planetary scale.
Each data center in a planetary scale environment is now as critical to availability as a power strip is to a single data center location. Mustering an argument to monitor every power strip would be challenging; a better approach is to have a drawer full of power strips, and replace ones that fail.
What the 2003 blackout taught us about security needs at planetary scale.
How Akamai keeps Internet traffic secure with redundancy across servers, server racks, data centers, cities, countries, and even continents.
This post focuses on another way we keep Internet traffic flowing smoothly in the face of attempted attacks: network and data mapping.
A look at how the world -- and our approach to security -- has changed in the decade since Blaster.
Indonesia replaces China as the top producer of attack traffic.
Operation Ababil has been a thorn in the side of financial institutions this past year, costing victims both business and sleep. At Akamai Edge, we talked a lot about the attacks -- particularly the lessons we've learned and the fresh security measures companies have put in place.
How attackers use vulnerabilities in PHP applications to exploit superglobals -- predefined variables in PHP -- and launch malicious code.
How to squeeze the maximum usefulness out of bots and other Web crawlers.
A story in eWeek about "one of the largest attacks in the history of the Internet" describes a 9-hour barrage against an unnamed entity that swelled to 100 gigabits per second at its peak. But does it really qualify as one of the biggest in Internet history?
A look at "watering-hole" attacks and what Akamai's CSIRT team has learned in tracking them.
The Syrian Electronic Army (SEA) -- a pro-Assad hacking group -- is making misery for some of the biggest entities on the Internet.
Protecting customers from DDoS attacks is an Akamai InfoSec specialty. When we see DDoS attempts against our customers, the typical thinking is that someone is doing it to force sites into downtime, which can cost a business millions in lost online sales. But sometimes, these attacks are simply a cover operation to distract the victim while something else is going on.
InfoSec receives many questions from Akamai customers on a daily basis. A few months ago, someone asked if we had a case study on attack vectors against the 2012 London Olympics. The customer had a big event coming up and wanted a picture of what they were up against -- and how they could defend against it all to keep their sites running smoothly. As it turned out, we did.
In recent weeks, Akamai's CSIRT team has seen the Web sites of multiple businesses redirected after being hijacked by a malicious user.
CSIRT's Patrick Laverty, who authored the advisory, said the intent of these hacks can include redirecting and capturing all company email via a rogue server, or simply embarrassing the affected company.
The problem is that the malicious user is able to gain administrative control of the account that allows changes to be made to the DNS records for the company involved. Some of these companies believe the account access was obtained through a phishing attack against a person in the company who had the credentials to make changes. In other cases, the attack was against the domain registrar itself.
"Companies can protect themselves from this type of attack by locking their domain with the registrar," Laverty wrote. "There are two levels of locks that can and should be enabled. There is a lock between the owner of the site and the domain registrar and there is a lock between the registrar and ICANN. To be truly safe, both levels of locks should be put in place."
Are you affected?
You know you are affected if your domain no longer shows the Web site you expect. If all pages under a domain either return the attacker's pages or 404 messages, you may be affected by this type of attack.
The most certain way to determine if you're affected is to check your domain's DNS records. This can be done with a simple "whois" lookup or by logging in to the registrar's account and checking the values. Be aware that the attacker may also have changed the password for the registrar account.
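As one concrete approach, a small script could compare the nameservers whois reports against the set you expect. This sketch shells out to the system whois binary; the domain and expected nameservers are placeholders:

```python
# Sketch: compare the nameservers whois reports against the expected
# set to spot an unauthorized DNS change. Shells out to the system
# whois binary; domain and expected values are placeholders.
import subprocess

DOMAIN = "example.com"
EXPECTED_NS = {"ns1.example-dns.net", "ns2.example-dns.net"}

def current_nameservers(domain):
    out = subprocess.run(["whois", domain],
                         capture_output=True, text=True).stdout
    return {
        line.split(":", 1)[1].strip().lower().rstrip(".")
        for line in out.splitlines()
        if line.strip().lower().startswith("name server:")
    }

ns = current_nameservers(DOMAIN)
if ns and ns != EXPECTED_NS:
    print("WARNING: nameservers changed:", sorted(ns))
```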
Laverty outlined a two-part solution.
The first is to properly educate the people who hold the password that can update DNS records with the registrar. In many of these attacks, the username and password were phished from someone with that level of access. If the credentials can be phished away, the second part of the protection won't help.
The second part is to have domain locks in place. Domains can have locks at both the registry and registrar levels. The site owner can set and control registrar locks, which prevent any other registrar from successfully requesting a change to DNS for a domain. The locks that can be set at the registrar level by the site owner are:
- clientDeleteProhibited
- clientUpdateProhibited
- clientTransferProhibited
The clientDeleteProhibited lock prevents a registrar from deleting the domain records without the owner first unlocking the domain. With clientUpdateProhibited set, the registrar may not make updates to the domain, and with clientTransferProhibited set, the registrar may not allow the domain to be transferred to another registrar. The only exception is when the domain registration period has expired. These locks can be set and unset by the site owner, and many registrars allow them at no cost.
A second level of locks can also be put in place at the registry level. These are controlled by the registry, and setting them can incur a cost to the domain owner. These locks are:
- serverDeleteProhibited
- serverUpdateProhibited
- serverTransferProhibited
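Once set, the registrar-level locks show up in whois output as "Domain Status" lines, so their presence can be verified the same way. A minimal sketch (again using the system whois binary; the domain is a placeholder):

```python
# Sketch: confirm the registrar-level locks appear as "Domain Status"
# lines in whois output. Uses the system whois binary; the domain is
# a placeholder.
import subprocess

REQUIRED_LOCKS = {
    "clientdeleteprohibited",
    "clientupdateprohibited",
    "clienttransferprohibited",
}

def domain_statuses(domain):
    out = subprocess.run(["whois", domain],
                         capture_output=True, text=True).stdout
    statuses = set()
    for line in out.splitlines():
        if line.strip().lower().startswith("domain status:"):
            value = line.split(":", 1)[1].strip()
            statuses.add(value.split()[0].lower())  # drop trailing ICANN URL
    return statuses

missing = REQUIRED_LOCKS - domain_statuses("example.com")
if missing:
    print("Missing locks:", ", ".join(sorted(missing)))
```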