June 2013 Archives
All websites connected to the public Internet receive bot traffic on a daily basis. A recent study shows that bots drive 16% of Internet traffic in the US; in Singapore the figure reaches 56%. Should you be worried? Well, not necessarily. Not all bot traffic is bad, and some of it is even vital to the success of a web site. Web sites are also affected differently depending on the profile of the company, the value of its content and the popularity of the site.
What are the different types of bots?
- White bots (good), like search engines (Google, Bing and Baidu), help drive more customers to the site and therefore increase revenue. They also help monitor the site's availability and performance (Akamai site analyzer, Keynote, Gomez) and proactively look for vulnerabilities (WhiteHat, Qualys).
- Black bots (bad) send additional traffic to the site that may impact its availability and integrity. Bad bot traffic can drive customers away from the site and negatively impact revenue and the web site's reputation. Examples include hackers trying to bring down a site with a DDoS attack or probing for and exploiting vulnerabilities, and competitors or other actors scraping a site to harvest pricing information for financial gain.
- Grey bots (neutral) don't necessarily help drive more customers to the site, nor do they specifically seem to cause any harm. Their identity and intent are more difficult to define; they usually present the characteristics of a bot but are generally non-aggressive. Such traffic only occasionally causes problems, typically due to a sudden increase in request rate.
Identifying bot traffic
Now that we know how to find bot traffic, it is necessary to identify the different types of bots.
- White bot traffic is usually predictable. It will have a specific header signature and will come from IPs belonging to the companies managing the bot. It is possible to control what these bots can request on the site through robots.txt (see the sketch after this list) or through the administration interface of the service managing the bot activity.
- Black bot header signatures vary widely, from exactly mimicking a genuine browser or search engine request to presenting several anomalies, such as missing headers or atypical headers in the request. Black bots may also send requests at a higher rate.
- Grey bot traffic can be more challenging to identify, since it generally presents the same characteristics as black bot traffic.
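As a concrete example of steering white bots, here is a minimal robots.txt sketch; the paths are hypothetical, and note that compliant crawlers honor these directives while black bots typically ignore them:

```
# Ask all compliant crawlers to avoid transactional and search paths (hypothetical paths)
User-agent: *
Disallow: /checkout/
Disallow: /search

# Example: pace one specific crawler (Crawl-delay is honored by Bing and Yandex, not Google)
User-agent: bingbot
Crawl-delay: 10
```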
In order to effectively identify bot activity it is necessary to implement and deploy a set of rules that look at the traffic from different perspectives. Several features of the Kona Site Defender product can help:
- The WAF application layer control feature consists of the ModSecurity core rule set and the Akamai common rule set. Some of the rules are specifically designed to look for anomalies in the headers, or for known bot signatures in the User-Agent header value or in combinations of headers in the request.
- The rules mentioned above can be complemented with WAF custom rules to help identify specific header signatures (a sample rule follows this list).
- The WAF adaptive rate control feature can also be used to monitor excessive request rates from individual clients.
- Lastly, the User Validation Module (UVM) can be used to perform client-side validation in extreme situations when none of the "traditional" methods seem to help.
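To make the custom-rule idea concrete, here is a minimal ModSecurity-style sketch, not an actual Kona rule; the bot signature string and rule IDs are hypothetical:

```
# Deny requests whose User-Agent contains a known bad bot signature (hypothetical string)
SecRule REQUEST_HEADERS:User-Agent "@contains BadScraperBot" \
    "id:900101,phase:1,deny,status:403,log,msg:'Known bad bot User-Agent'"

# Log a header anomaly: genuine browsers almost always send an Accept header
SecRule &REQUEST_HEADERS:Accept "@eq 0" \
    "id:900102,phase:1,pass,log,msg:'Request missing Accept header'"
```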
Mitigating bot traffic
Once bot traffic is identified, the next step is to decide what to do with the black and grey bot traffic. You may decide to just monitor the traffic over time and only take action should the activity become too aggressive and threaten the stability of the web site. Alternatively, you may decide to take action as soon as the activity is identified, regardless of the volume of traffic generated. The type of action taken may vary depending on your business needs:
- Deny the traffic: this is the default but least elegant solution; the client will receive an HTTP 4xx or 5xx response code. This gives the bot operator a clear indication that such activity is not allowed on the site and that they've been identified by some security service or device. Bot operators may then vary the format of their requests to see if they can stay under the radar.
- Serve alternate content: the content served could vary from a generic "site unavailable" page to something that looks like a real response but contains only generic data. This strategy may slow down bot operators and keep them in the dark as to why they cannot access the data they want.
- Serve a cached, stale, or static version of the content: this is the best strategy of all, but not always possible to implement; some content simply cannot be cached or stored as static data on an alternate origin, because of compliance concerns or its dynamic nature. It could take the bot operator some time to realize the data they are getting is worthless, and an attacker running a DDoS against the site would also get discouraged and move on to a different target.
David Senecal is senior enterprise architect at Akamai. Patrice Boffa is a director of global service delivery at Akamai.
On June 19th we uncovered, halted and contained a targeted attack on our internal network infrastructure. Our systems have been cleaned and there is no evidence of any user data being compromised. We are working with the relevant authorities to investigate its source and any potential further extent. We will let you know if there are any developments. It is possible that a few thousand Windows users, who were using Opera between 01.00 and 01.36 UTC on June 19th, may automatically have received and installed the malicious software. To be on the safe side, we will roll out a new version of Opera which will use a new code signing certificate.
The solution, the company said, is to upgrade your browser to the latest version.
Discovery Channel's nerve-wracking "Skywire Live with Nik Wallenda" last Sunday brought new meaning to the term "cable TV." The live television and online broadcast featured tightrope walker Nik Wallenda, a.k.a. "The King of the Highwire," traversing a 1,400-foot-wide section of the Grand Canyon some 1,500 feet in the air on a two-inch cable.
As pleased as Akamai was to help Discovery Channel deliver the live online video stream to viewers worldwide, we were far happier - and relieved - that Nik successfully completed the stunt without incident. Akamai worked with Discovery Channel to make streams from five different camera angles during the walk available to visitors to its Wired In multiplatform experience, which generated more than 2.1 million streams on Sunday and peaked at 322,000 concurrent streams.
Discovery Channel reported that the event was watched by 12.98 million total viewers and became the "#1 most social show across broadcast and cable in the U.S.," where it generated 1.3 million Tweets.
Chris Nicholson is a senior public relations manager at Akamai.
A DDoS attack targeted at one web site is bad enough. But what happens when that single attack poses the distinct possibility of doing even more damage than originally intended? The kind of collateral damage I'm talking about is very real when you take into account IT architectures reliant on shared services.
Shared services include anything that serves more than one application or set of users, for example:
- Network infrastructure
- Network bandwidth
- Market data and other sources of information
- Domain name servers
And while shared services can benefit an organization by bringing down IT costs, creating resource efficiencies and shrinking the IT footprint, in the case of a DDoS attack, there can be significant disadvantages. An attack on an organization with a healthy amount of shared services has the capability to cause unforeseen outages across a wide number of applications, users, and geographies.
In this post I'll present three cases in which a DDoS attack impacted a shared service, knocking out applications far beyond the attack target. In each of these cases, the companies were not using Akamai to protect the systems under attack.
Online Attacks and Large Online Events
The upcoming Olympic Games, much like other widely publicized, international events, offer unique challenges for online security. In the course of any given year, Akamai supports many of these online events including concerts, sporting competitions, elections, and other newsworthy happenings. Because of this, we've had substantial visibility into the various ways the "bad guys" may try to take advantage of an online event for their own gain. Just as important, these events typically involve a variety of online components - from live streaming to commerce - that provide a significant attack surface for the event's security staff to protect.
The primary concern when supporting a large event is that online resources may be built in a hurry and then receive a sudden influx of users. As such, there are time and effort constraints to securing these websites and the infrastructure that carries them. Usually, as the security team for the event, you do not have a lot of historical Internet traffic to define what is "normal," so you have to rely on attack trends from other events and threat intelligence to detect any new techniques that specifically target your event.
One thing you need to be prepared to defend against is Denial of Service (DoS) attacks, where the attacker disrupts the operation of an online service such as a livestream or website. Highly visible event websites are prime targets, and a cleverly conducted Distributed DoS attack looks like a flash mob of legitimate users coming to a website.
The high visibility for events such as the Olympics can also prompt defacement style attacks. Because the event draws a large volume of website users, hacktivist groups wishing to propagate their messages can alter the event's website to display their message to a broad audience and to generate headlines that create awareness for their cause.
In a similar vein, most large events have a scheduling site or a storefront where they sell tickets, memorabilia, or other services. These can be prime targets for data exfiltration for anything from email addresses to passwords to credit card information to VIP contact information.
Data breaches can also lead to inappropriate information disclosure. This is less of a concern for a real-time event such as the Olympics, but for events with a predetermined outcome, such as awards ceremonies, attackers could access the results before they are officially released, leading to significant audience loss and lost revenue. Revenue loss could also result from outright content theft, where attackers make a copy of the event content available on their own website or on portable media.
Significant interest in an event may make associated online assets a possible target for distributors of malware. In this situation, attackers would alter the website in a non-obvious, non-visible manner to serve hooks to malicious content that runs on users' computers and installs other software such as viruses, keyloggers, or the Zeus banking trojan.
And unfortunately, the event organizers and their online assets are not always the sole target; event audiences can be targets too. Vehicles could include phishing, spam, and malware email, where attackers pursue a wide variety of goals, such as stealing information from users' computers, implanting viruses, and conducting outright scams involving counterfeit tickets, VIP passes, and fraudulent "discount tickets" sold to unsuspecting consumers.
Overall, the trick to keeping online events as safe as possible is to understand your potential adversary based on previous trends and current capabilities and understand how they're most likely to attack, the motivation for the attack, and countermeasures that you can implement. Doing so will help you apply the right defenses to the right assets and have a successful event.
Put bluntly: to others, we're jerks.
If you don't think this is a problem, you can stop reading here.
The dysfunctional tale of Bob and Alice
Imagine this. Developer Bob just received an email from your Infosec department, subject: "Important Security Update." He sighs, thinking of the possibilities: a request to rotate his password, or a new rule? Maybe it's a dressing-down for having violated some policy, a demand for extra work to patch a system, or yet another hair-on-fire security update he doesn't really see the need for. His manager is on his case: he's been putting in long hours on the next rev of the backend but library incompatibilities and inconsistent APIs have ruined his week, and he's way behind schedule. He shelves the security update - he doesn't have time to deal with it, and most things coming out of Infosec are just sound and fury anyway - and, thinking how nice it would be if his team actually got the resources it needed, continues to code. He'll get to it later. Promise.
Meanwhile, you, Security Researcher Alice, are trying not to panic. You've seen the latest Rails vulnerability disclosure, and you know it's just a matter of hours before your exposed system gets hit. You remember what happened to Github and Heroku, and you're not anxious to make the front page of Hacker News (again?!). If only Bob would answer his email! You know he's at work - what's happening? The face of your boss the last time your software got exploited appears in your mind, and you cringe, dreading an unpleasant meeting ahead. You fume for several minutes, cursing all developers everywhere, but no response is forthcoming. Angrily, you stand up and march over to his cube, ready to give him a piece of your mind.
Pause. What's going on here, and what's about to happen?
Two weeks ago, a large number of eCommerce professionals converged in Chicago for IRCE 2013. I was one of the presenters, speaking on "Building m-commerce: Different approaches, different outcomes."
There were plenty of instances throughout the conference where the growing mobile traffic and revenue numbers highlighted the importance of delivering fast, quality mobile experiences to consumers. As most of us know - fast, quality mobile experiences are better for business.
Yet as most of us also know, delivering fast, quality mobile experiences isn't exactly easy. The challenges associated with mobile performance are well documented (browsers, network, devices, etc.). For those who are interested in diving a little deeper into the topic, I recommend watching Ilya Grigorik's Google I/O talk - Mobile Performance from the Radio Up: Battery, Latency and Bandwidth Optimization.
The fact remains that shoppers expect fast, quality mobile Web and app experiences, and they generally don't know or care about the technological challenges associated with delivering them. In addition, as we start to think about the latest techniques to engage mobile users, such as Responsive Web Design (RWD), these challenges become even greater. Responsive Web Design is a Web development approach that suggests Web pages should respond to the context in which they're loaded (primarily screen size) and change their user interface accordingly.
So what does delivering large, complex pages to mobile devices mean from an end-user's perspective? Below is a snapshot of the experience of an end-user visiting a US retailer's RWD site's home page on a variety of different devices/networks. The conclusion is obvious. The delivery of a relatively small 700KB site to a mobile device, over wireless networks, has resulted in serious performance shortcomings.
The first step to delivering fast, quality RWD sites is to focus on the actual page and the associated objects delivered to the end-user. As Web performance optimization guru Steve Souders likes to point out: "80-90 percent of end-user response time is spent on the frontend. Start there." Key front-end techniques include:
- Reducing the number of requests
- Reducing the number of bytes
- Accelerating rendering
For a more detailed view of how to actually reduce the number of requests & bytes and accelerate rendering download Akamai's Front-End Optimization primer.
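As a simple illustration of reducing requests and accelerating rendering, consider consolidating render-blocking scripts into a single minified bundle that loads asynchronously (the file names are hypothetical):

```html
<!-- Before: three separate render-blocking script requests -->
<script src="/js/library.js"></script>
<script src="/js/carousel.js"></script>
<script src="/js/tracking.js"></script>

<!-- After: one concatenated, minified request that no longer blocks rendering -->
<script src="/js/bundle.min.js" async></script>
```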
Independent of your approach to engaging mobile users, it is always worth remembering the following:
- Deliver consistent, fast, quality web and application experiences
- Adopt your customers' perspective
- Optimize for mobile first
Here in Akamai's InfoSec department, we constantly remind employees and customers to keep up on all the latest security patches in their environment. Since Windows is everywhere in the business world, it's particularly important to keep an eye on Microsoft's patching efforts. Here's what Microsoft had to say in announcing its new bounty programs:
Today is an inflection point for Microsoft, as well as the security industry. For the first time ever, Microsoft is offering direct cash payouts in exchange for reporting certain types of vulnerabilities and exploitation techniques. We are making this shift in order to learn about these issues earlier and to increase the win-win between Microsoft's customers and the security researcher community.
Full details for the new bounty programs and a fantastic technical deep-dive by our esteemed panel of judges (headed by Matt Miller and David Ross) can be found on SRD's blog.
In short, we are offering cash payouts for the following programs:
- Mitigation Bypass Bounty - Microsoft will pay up to $100,000 USD for truly novel exploitation techniques against protections built into the latest version of our operating system (Windows 8.1 Preview). Learning about new exploitation techniques earlier helps Microsoft improve security by leaps, instead of one vulnerability at a time. This is an ongoing program and not tied to any event or contest.
- BlueHat Bonus for Defense - Microsoft will pay up to $50,000 USD for defensive ideas that accompany a qualifying Mitigation Bypass Bounty submission. Doing so highlights our continued support of defense and provides a way for the research community to help protect over a billion computer systems worldwide from vulnerabilities that may not have even been discovered.
- IE11 Preview Bug Bounty - Microsoft will pay up to $11,000 USD for critical vulnerabilities that affect IE 11 Preview on Windows 8.1 Preview. The entry period for this program will be the first 30 days of the IE 11 Preview period. Learning about critical vulnerabilities in IE as early as possible during the public preview will help Microsoft deliver the most secure version of IE to our customers.
L-R: David Seidman, Gerardo di Giacomo, Mark Oram (via avatar), Mike Reavey, Dustin Childs, Leah Lease, Rob Chapman, Neil Sikka, Jacqueline Lodwig, Brandon Caldwell, Katie Moussouris, Nate Jones, Sweety Chauhan, Emily Anderson, Claudette Hatcher, Cynthia Sandwick, Stephen Finnegan, Manuel Caballero, Ben Richeson, Elias Bachaalany, David Ross, Cristian Craioveanu, Ken Johnson, Mario Heiderich, Jonathan Ness. Not pictured: Christine Aguirre, Danielle Alyias, Michal Chmielewski, Chengyun Chu, Jules Cohen, Bruce Dang, Jessica Dash, Richard van Eeden, Michelle Gayral, Cristin Goodwin, Angela Gunn, Joe Gura, Dean Hachamovitch, Chris Hale, Kyle Henderson, Forbes Higman, Andrew Howard, Kostya Kortchinsky, Jane Liles, Matt Miller, William Peteroy, Georgeo Pulikkathara, Rob Roberts, Matt Thomlinson, David Wheeler, Chris Williams. Behind the camera: Jerry Bryant.
As previously posted here, Akamai's Frank Childs recently presented at the CDN Summit in NYC alongside Charter Communications' Kreig DuBose for a session titled "Deploying an Operator CDN to Enhance Customer Experience." Frank spoke about our Aura Network Solutions, and Kreig explained his decision to select Aura, the results of the implementation and next steps. If you're interested in seeing the presentation, I've included the video below...
After the CDN Summit it was back on the road for the Cable Show in Washington, DC. The event was the perfect place to see new advances in interactive video applications, breakthrough technologies that are changing the way people communicate online, and multi-screen content delivery strategies, among others. And of course we heard from industry leaders talking about their investment and product development priorities for 2014 and beyond. While I spent some time celebrity spotting - MC Hammer, Ricky Schroder and JLo, to name just a few - our very own Kris Alexander presented in what is known as "Imagine Park" in the center of the exhibit floor. To a packed crowd, Kris showcased Akamai's "Hyperconnected Living Room Experience," explaining how the second screen trend is likely to evolve and what is possible in the world of synchronized experiences and companion apps.
Here is a video of his presentation.
As you might imagine, The Cable Show folks do a remarkable job of capturing all of the show's presentations online. Take a look at their web site to view more sessions at: https://2013.thecableshow.com
Tara Bartley is a Senior Product Marketing Manager at Akamai.
Recently, DDoS attacks have spiked well past 100 Gbps several times. A common move used by adversaries is the DNS reflection attack, a category of Distributed Reflected Denial of Service (DRDoS) attack. To understand how to defend against it, it helps to understand how it works.
How DNS works
At the heart of the Domain Name System are two categories of name server: the authoritative name server, which is responsible for providing authoritative answers to specific queries (like use5.akam.net, which is one of the authoritative name servers for the csoandy.com domain), and the recursive name server, which is responsible for answering any question asked by a client. Recursive name servers (located in ISPs, corporations, and data centers around the world) query the appropriate authoritative name servers around the Internet, and return an answer to the querying client. An open resolver is a resolver that will answer recursive queries from any client, not just those local to it. Because DNS requests are fairly small and lightweight, DNS primarily uses the User Datagram Protocol (UDP), a stateless messaging system. Since UDP requests can be sent in a single packet, the source address is easily forgeable with any address desired by the true sender.
DNS reflection
A DNS reflection attack takes advantage of three things: the forgeability of UDP source addresses, the availability of open resolvers, and the asymmetry of DNS requests and responses. To conduct an attack, an adversary sends a set of DNS queries to open resolvers, altering the source address on the requests to be that of the chosen target. The requests are designed to have much larger responses (often, using an ANY request, a 64-byte request yields a 512-byte response), thus resulting in the recursive name servers sending about 8 times as much traffic to the target as they themselves received. A DNS reflection attack can directly use authoritative name servers, but that requires more preparation and research, making requests specific to the scope of each DNS authority used.
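To see the asymmetry for yourself, a small Python sketch (assuming the dnspython package) can compare the wire sizes of an ANY query and its response; the resolver address below is a placeholder for one you operate and are authorized to query:

```python
# Measure DNS request/response asymmetry -- a sketch using dnspython.
import dns.message
import dns.query
import dns.rdatatype

RESOLVER = "192.0.2.53"  # placeholder: substitute a resolver you operate

# Build an ANY query, the type attackers favor for its large answers
query = dns.message.make_query("example.com", dns.rdatatype.ANY)
request_bytes = len(query.to_wire())

response = dns.query.udp(query, RESOLVER, timeout=2.0)
response_bytes = len(response.to_wire())

print(f"request: {request_bytes} bytes, response: {response_bytes} bytes")
print(f"amplification: {response_bytes / request_bytes:.1f}x")
```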
Eliminating DNS reflection attacks
An ideal solution would obviously be to eliminate this type of attack, rather than every target needing to defend themselves. Unfortunately, that's challenging, as it requires significant changes by infrastructure providers across the Internet.
BCP38
No discussion of defending against DRDoS-style attacks is complete without a nod to BCP38. These attacks only work because an adversary, when sending forged packets, has no routers upstream filtering based on the source address. There is rarely a need to permit an ISP user to send packets claiming to originate in another ISP's network; if BCP38 were adopted and implemented in a widespread fashion, DRDoS would be eliminated as an adversarial capability. That's sadly unlikely, as BCP38 enters its 14th year; the complexity and edge cases are significant.
The open resolvers
While a few enterprises have made providing an open resolver into a business (OpenDNS, Google DNS), many open resolvers are either historical accidents or the result of incorrect configuration. Even MIT has turned off open recursion on its high-profile name servers.
Barring that, recursive name servers should implement rate limiting, especially on infrequent request types, to reduce the multiplication of traffic that adversaries can gain out of them.
Self-defense
Until ISPs and resolver operators implement controls to limit how large attacks can become, attack targets must defend themselves. Sometimes attacks are aimed at infrastructure (like routers and name servers), but most often they target high-profile websites operated by financial services firms, government agencies, retail companies, or whoever has caught the eye of the attacker this week.
An operator of a high-profile web property can take steps to defend their front door. The first step, of course, is to find that front door and understand what infrastructure it relies on. Then they can evaluate their defenses.
Capacity
The first line of defense is always capacity. Without enough bandwidth at the front of your defenses, nothing else matters. Capacity needs to be measured both in raw bandwidth and in packets per second, because hardware often has much lower throughput as packet sizes shrink. Unfortunately, robust capacity is now measured in the 300+ gigabit per second range, well beyond the resources of the average datacenter. However, attacks in the 3-10 gigabit per second range are still common, and well within the range of existing datacenter defenses.
Filtering
For systems that aren't DNS servers themselves, filtering out DNS traffic as far upstream as possible is a good solution - certainly at the border firewall. One caveat: web servers often need to make DNS queries themselves, so ensure that they have a path to do so. In general, the principle of "filter out the unexpected" is a good filtering strategy.
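As a sketch of that principle on a Linux host that is not a name server (iptables syntax; adapt the rules to your own border firewall):

```
# Allow DNS replies to queries this host itself initiated (stateful match)
iptables -A INPUT -p udp --sport 53 -m conntrack --ctstate ESTABLISHED -j ACCEPT

# Drop everything else on UDP/53 in either direction: this host serves no DNS
iptables -A INPUT -p udp --sport 53 -j DROP
iptables -A INPUT -p udp --dport 53 -j DROP
```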
DNS server protection
Since DNS servers have to process incoming requests (an authoritative name server has to respond to all of the recursive resolvers around the Internet, for instance), merely filtering DNS traffic upstream isn't an option. So what is perceived as a network problem by non-DNS servers becomes an application problem for the DNS server. Defenses may no longer be simple "block this" strategies; rather, defense can take advantage of application tools to provide different defenses.
Redundancy
While the total number of authoritative DNS server IP addresses for a given domain is limited (13 should fit into the 512-byte DNS response packet; generally, 8 is a reasonable number), many systems use nowhere near the limit. Servers should be diversified, located in multiple networks and geographies, ensuring that attacks against two name servers aren't traveling across the same links.
Anycast
Since requests come in via UDP, anycasting (the practice of having servers responding on the same IP address from multiple locations on the Internet) is quite practical. Done at small scale (two to five locations), this can provide significant increases in capacity, as well as resilience to localized physical outages. However, DNS also lends itself to architectures with hundreds of name server locations sprinkled throughout the Internet, each localized to only provide service to a small region of the Internet (possibly even a single network). Adversaries outside these localities have no ability to target the sprinkled name servers, which continue to provide high quality support to nearby end users.
Segregation
Based on Akamai's experience running popular authoritative name servers, 95% of all DNS traffic originates from under a million popular name server IP addresses (getting to 99% requires just under 2 million IP addresses). Given that the total IPv4 address space is around 4.3 billion addresses, name servers can be segregated: a smaller set to handle the "unpopular" name servers, and a larger set to handle the popular ones. Attacks that reflect off unpopular open resolvers thus don't consume the application resources providing quality of service to the popular name servers.
Response handling
Authoritative name servers should primarily see requests, not responses. Therefore, they should be able to isolate, process, and discard response packets quickly, minimizing impact to resources engaged in replying to requests. This isolation can also apply to less frequent types of request, so that when a server is under attack, it can devote resources to requests that are more likely to provide value.
Rate limiting
Traffic from any name server should be monitored to see if it exceeds reasonable thresholds and, if so, aggressively managed. If a requesting name server typically sends a few requests per minute, an authoritative server can decline to answer most requests from one suddenly asking dozens of times per second (these thresholds can and should be dynamic). This works because of the built-in fault tolerance of DNS: if a requesting name server doesn't see a quick response, it will send another request, often to a different authoritative name server (and deprioritize the failed name server for future requests).
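A minimal sketch of the idea in Python, using a token bucket per requesting name server; the thresholds are illustrative, not Akamai's actual values, and a production implementation would adapt them to each source's observed history:

```python
# Per-source token-bucket rate limiting, as a name server might apply it.
import time
from collections import defaultdict

RATE = 5.0    # answered queries per second per source -- illustrative value
BURST = 20.0  # bucket depth: short bursts above RATE are tolerated

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow(source_ip: str) -> bool:
    """Return True if a query from source_ip should be answered."""
    bucket = buckets[source_ip]
    now = time.monotonic()
    # Refill tokens for the time elapsed since the last query, capped at BURST
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False  # drop silently; DNS's retry logic sends the query elsewhere
```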
As attacks grow past the current few hundred gigabits per second toward terabit-per-second attacks, robust architectures will be increasingly necessary to maintain a presence on the Internet in the face of adversarial action.
Last year's Click Frenzy online sale in Australia, modeled after the hugely successful Black Friday sale in the US, attracted a huge amount of media attention for all the wrong reasons. Within minutes of the sale going live on November 20 last year, the website buckled under the strain of unanticipated traffic volumes. A number of consumers, excited at the prospect of grabbing a great online bargain, were left empty handed and disappointed. The media's reaction was swift, and merciless.
Fast-forward six months, and the latest Click Frenzy sale was completed without any technical issues at all. Click Frenzy needed an industry-leading solution, and as such enlisted Akamai.
Following their initial sale and the subsequent technical issues they encountered, Click Frenzy wanted to understand how they could handle the sudden bursts in traffic volume that characterize their online business model. For Click Frenzy, it was absolutely imperative that their next big sale performed seamlessly to restore consumer and retailer confidence. Further technical issues had to be avoided at all costs.
We approached Click Frenzy and explained how Akamai's intelligent platform - specifically Akamai's AQUA & KONA solutions - enables traffic to be securely offloaded to distributed computing resources, thereby alleviating the burden on their data centre. This model allows businesses such as Click Frenzy to manage traffic spikes that would be unheard of for most conventional online retailers.
Following Click Frenzy's 24-hour sale last month, the Akamai platform managed 169 million requests, with a peak of 29,722 requests per second. Total traffic volume exceeded 3TB. This article, which features an interview I did following the sale, provides a great overview of some of the positive outcomes Click Frenzy has enjoyed since its rollout of the Akamai intelligent platform.
Although the Click Frenzy site itself performed flawlessly, some technical issues were still evident on the part of some retailers involved in the promotion since their websites - not utilizing Akamai's platform - were unable to cope with the traffic being directed at them by Click Frenzy.
For retailers, having too many customers can be a nice problem to have, but customers can be unforgiving if they run into the same issues time and again. This is particularly true online, where a customer can 'enter' another store with a simple click of a mouse.
A robust technology platform that enables customers to access the information they need and purchase in an efficient manner is absolutely essential in today's super-competitive online retail environment. And retailers don't want to be bogged down by technology. They just want it to work, so they can focus on their core business and what they know best - retailing. For our friends at Click Frenzy, that's exactly what they're doing now and we look forward to many more successful online 'frenzies', and maybe picking up a bargain or two ourselves along the way.
Ian Teague is a regional sales manager at Akamai.
One of the more challenging tasks as the new guy in Akamai's InfoSec department is getting to know George Penguin. He's our mascot and ambassador of good will. His likeness is everywhere in the office, most notably in the form of soft, stuffed toys that dominate the workspace like an invasion of the tribbles from "Star Trek."
As part of my new role as Akamai's security storyteller, I've been digging around in search of all the press coverage this group has gotten over the years. I'm finding that many articles and blog posts came from me, particularly what I wrote in my last job as managing editor of CSO Magazine.
You could say my coming here was destiny, based on how easily I focused on Akamai InfoSec research as a journalist. Most recently, I wrote about two presentations from SOURCE Boston 2013. One, by Senior Security Architect Eric Kobrin, was an analysis of the BroBot DDoS attacks that have targeted the banking sector.
The other talk, by researcher Christian Ternus, was about Akamai's Adversarial Resilience program. The goal: better protect Akamai's customers by thinking like those who attack them. "At Akamai the attack surface is huge," Ternus said. "As the bad guys attack our customers, we are constantly being tested to see if our systems are good enough. What's needed then is resilience -- the ability to adapt. Our job is to think and act like the adversary to make Akamai safer."
Looking further back, as a journalist I usually gravitated toward Akamai's InfoSec team for perspective and raw data on the biggest DDoS attacks and pretty much any story concerning cloud and application security.
There was this inside look at what it's like for Akamai to deal head-on with incoming DDoS attacks against customers.
And there was this report -- I didn't write it but did assign it -- throwing cold water on the notion that hacktivists were the chief culprits in the banking attacks.
Indeed, I've often come knocking when I wanted to measure the real impact of attacks against the hype I'd be seeing elsewhere in the media. The realities have often been less dramatic than reported.
Now that I've tossed my reporter's hat on the shelf to collect dust, expect a much deeper focus from me on the raw detail that comes out of a company that, at last check, handled tens of billions of daily Web interactions for 90 of the top 100 online U.S. retailers, 29 of the top 30 global media and entertainment companies, nine of the top 10 world banks, and all branches of the U.S. military.
This is going to be both fun and informative.
And it won't take long to ramp things up. In hindsight, I've been telling Akamai security stories all along.
A reminder to Akamai customers and the larger InfoSec community that Microsoft has released its security update for June. Below are the specific bulletins. Click the link on each to get full details.
You'll want to get a fix on which of these are most important to your organization and install them as soon as possible.
- Cumulative Security Update for Internet Explorer (critical)
- Vulnerability in Windows Kernel Could Allow Information Disclosure (important)
- Vulnerability in Kernel-Mode Driver Could Allow Denial of Service (important)
- Vulnerability in Windows Print Spooler Components (important)
- Vulnerability in Microsoft Office (important)