
The Akamai Blog

Algorithms, Alerts, and Akamai Threat Intelligence

Let me start by posing a question: If in one week security solution A produces 120 alerts and security solution B produces 45 alerts, which solution is providing you with more effective protection? The answer is: It depends.

On the face of it, solution A appears to be more effective because it's delivering more alerts than solution B. But what if solution A is actually delivering a considerable number of alerts that don't represent a real security risk to the organization, or in other words, are false positive alerts?
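To make the comparison concrete, here is a small, purely illustrative calculation (all numbers beyond the alert counts above are hypothetical):

```python
# Hypothetical numbers: raw alert volume says little until false
# positives are subtracted out.
def true_positives(alerts: int, false_positives: int) -> int:
    """Alerts that represent a real security risk."""
    return alerts - false_positives

# Suppose 85 of solution A's 120 alerts are false positives,
# but only 5 of solution B's 45 alerts are.
a_genuine = true_positives(120, 85)   # 35 genuine detections
b_genuine = true_positives(45, 5)     # 40 genuine detections
print(a_genuine, b_genuine)
```

On those assumed numbers, the "noisier" solution A actually surfaces fewer genuine threats than solution B, while burying analysts in noise.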

Gone Phishing For The Holidays

Written by Or Katz and Amiram Cohen

Overview:

While our team, Akamai's Enterprise Threat Protector Security Research Team, monitored internet traffic throughout the 2017 holiday season, we spotted a widespread phishing campaign targeting users through an advertising tactic. During the six-week timeframe, we tracked thirty different domains with the same prefix: "holidaybonus{.}com". Each one advertised the opportunity to win an expensive technology prize: a free iPhone 8, PlayStation 4, or Samsung Galaxy S8.

The websites associated with this phishing campaign used a combination of social engineering techniques, such as creating trust (by borrowing the reputation of well-known companies) and dismantling suspicion (through IP verification and social sharing). They led users to willingly give away sensitive information by asking them to answer three trivia questions and submit their email address in order to win one of the offered prizes.
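As a sketch of how domains from a campaign like this could be flagged in DNS query logs, here is a minimal rule-based filter (the query list is hypothetical, and production detection relies on reputation feeds and classifiers rather than a single regex):

```python
import re

# Matches the campaign domain itself and any subdomain of it.
CAMPAIGN_PATTERN = re.compile(r"(^|\.)holidaybonus\.com$")

def is_campaign_domain(domain: str) -> bool:
    """Return True if the queried hostname belongs to the campaign."""
    return bool(CAMPAIGN_PATTERN.search(domain.strip().lower()))

# Illustrative query log entries (hypothetical hostnames).
queries = ["win.holidaybonus.com", "holidaybonus.com", "example.com"]
flagged = [q for q in queries if is_campaign_domain(q)]
print(flagged)
```

Note that anchoring the pattern at the end of the hostname avoids matching look-alike registrations such as `holidaybonus.com.evil.net`.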

 

The Botconf Experience

By Yohai Einav, Amir Asiaee, Ali Fakiri-Tabrizi and Alexey Sarychev

Originally Posted on January 4, 2018

Earlier this month we took our show on the road, presenting some of our team's work at the Botconf conference in beautiful Montpellier, France. We could talk here for hours about the food, wine, culture, etc., but it would probably be more useful for our readers to learn about the current developments in the war against bots first. So we'll start with that and perhaps get to the food discussion in the appendix.

 

2017 was a year of epic proportions that broke numerous retail records worldwide. The stage was set earlier in the year by Amazon's Prime Day, which became Amazon's biggest day ever by generating over $1 billion in sales (a 60% increase YoY). This was followed by Singles' Day, during which Alibaba generated a record $25 billion in sales (a 39% increase over 2016). And then Black Friday and Cyber Monday joined the party by generating revenue of over $11.5 billion. Cyber Monday went on to become the largest online shopping day in US history.

A key trend that played out throughout the year was a significant shift in buying patterns; more consumers moved to shopping online and, as a result, there was a decrease in traffic at brick and mortar stores.  Among the three major platforms for online shopping (namely desktops, mobiles and tablets), mobile devices played a pivotal role during this year's holiday season. In fact, close to 40% of the Black Friday revenue was generated via mobile devices and smartphone revenue on Cyber Monday grew 32.2% from last year, reaching a new all-time high of $1.59B.

The week before Christmas was the last peak period for retailers in 2017 to appeal to consumers that were frantically looking to buy gifts for their loved ones. A recent NRF survey showed that only 12 percent of consumers had finished their holiday shopping as of December 12, with the average shopper having completed only 61 percent. As a result of this, there was a huge surge in traffic on our platform during the week before Christmas. Here are the key trends that we observed on our platform in 2017, one of the most successful years for retailers:

 

A Death Match of Domain Generation Algorithms

By Hongliang Liu and Yuriy Yuzifovich

Originally posted on December 29, 2017 

Today's post is all about DGAs (Domain Generation Algorithms): what they are, why they came into existence, where they are used, and, most importantly, how to detect and block them. As we will demonstrate here, the most effective defense against DGAs is a combination of traditional methods with modern machine intelligence.
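To make the idea concrete, here is a minimal sketch of a seeded DGA together with one crude statistical signal often used against such names (the algorithm below is invented for illustration and does not correspond to any real malware family):

```python
# Illustrative only: a date-seeded DGA generating candidate domains,
# and a character-entropy heuristic that flags random-looking names.
import hashlib
import math
from collections import Counter

def generate_domains(seed: str, count: int = 5) -> list:
    """Derive deterministic pseudo-random .com domains from a shared seed
    (e.g. today's date), as both malware and its C2 operator would."""
    domains = []
    state = seed.encode()
    for _ in range(count):
        state = hashlib.sha256(state).digest()
        name = "".join(chr(ord("a") + b % 26) for b in state[:12])
        domains.append(name + ".com")
    return domains

def shannon_entropy(label: str) -> float:
    """Bits per character of a domain label; dictionary words score low,
    machine-generated strings score high."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Random-looking labels tend to have noticeably higher entropy
# than human-chosen names.
for d in generate_domains("2017-12-29") + ["akamai.com", "google.com"]:
    print(d, round(shannon_entropy(d.split(".")[0]), 2))
```

Real detectors combine signals like this with n-gram models, NXDOMAIN rates, and supervised classifiers, since entropy alone misfires on short or hyphenated legitimate names.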

Impact of Meltdown and Spectre on Akamai

Overview

On Wednesday, January 3rd, researchers from Google Project Zero, Cyberus Technology, Graz University of Technology, and other organizations released details about a pair of related vulnerabilities, dubbed Meltdown and Spectre. These vulnerabilities appear to affect all modern processors and enable malicious code to read sensitive portions of memory on nearly all systems, including computers and mobile devices.

Akamai is aware of the side effects of "speculative execution", the core capability that enables the Meltdown and Spectre vulnerabilities. We are testing the performance and efficacy of the available patches on our systems. Because of our technical approach to handling the data of many customers, we do not believe these vulnerabilities pose a significant threat to the Akamai platform. Akamai does not rely on the capabilities that enable these vulnerabilities. We will continue to provide updates as more details become public.

Details

All modern CPU architectures, including those from Intel, AMD, and ARM, use a technique called "speculative execution". This technique takes advantage of times when the CPU is waiting for a slow operation, such as reading or writing main memory, to proactively perform tasks predicted from the current activities. This speeds up overall processing by completing tasks before they're required; if a task turns out not to be needed, the CPU unwinds the work and frees up the resources. Unfortunately, this process is not perfect, and the CPU can be tricked into giving read access to kernel memory.

The weaknesses that speculative execution introduces lead to the paired vulnerabilities called Meltdown and Spectre. Both vulnerabilities grant a user program read access to kernel memory and to the memory space of other programs, and hence to all the secrets they contain. The impact of these vulnerabilities is especially concerning in the case of shared cloud services, as they can allow an attacker to escape the memory space of the hypervisor, read other sections of virtual memory, and potentially access secrets of other virtual hosts.

 The difference between Meltdown and Spectre is in the mechanism they use to read memory. Meltdown allows a user program to read any physical memory on the machine directly during speculative execution, leaving "tell-tale" effects that indicate what value has been read. With Spectre, a user program "tricks" the kernel into reading the memory itself during speculative execution and leaving "tell-tale" effects (that the user can see) that indicate what value has been read.

Because these vulnerabilities are at the hardware level, they affect almost all operating systems. Patches for Meltdown are available for the most popular operating systems, with additional patches being released quickly. The Spectre vulnerability is not patchable at this time, and it is projected that mitigating it will require new hardware, meaning a new generation of CPUs. Software compilers could potentially be patched to disable the exposed features that make Spectre possible, but this comes with significant costs.

An additional concern is that the patches for these vulnerabilities impose a significant performance penalty on the CPU, an impact that many heavily used systems may not be able to absorb.

Impact to Akamai

Akamai is in the process of evaluating the patches for these vulnerabilities. Our desktop platforms (Macs, Windows, Linux) are as affected as anyone else's. We're rolling out vendor patches and making suggested configuration changes as we receive them. Our production systems are not significantly impacted by these vulnerabilities at this time. There are two primary aspects of Akamai's environment that limit exposure to Meltdown and Spectre. First, Akamai's platforms do not rely on CPU-enforced page table isolation for separation of customer data. Second, the platforms do not allow for the execution of arbitrary code by customers or users, severely limiting any potential to exploit this weakness.

Akamai believes there is minimal customer impact from these vulnerabilities, but we will continue to proactively evaluate this problem. Customer secrets and personally identifiable information are not exposed by this vulnerability. 

Details about the Meltdown and Spectre vulnerabilities are still evolving, and Akamai is continuing to research their impact on our systems and our customers.  

More details can be found in Intel's Newsroom https://newsroom.intel.com/.

 

Trusted access to WordPress /wp-admin for content authors

WordPress started as just a blogging system, but it has evolved into a full content management system, and much more, through thousands of plugins, widgets, and themes. One of the main challenges I have seen with customers is providing secure access to /wp-admin or /wp-login.php so that content authors can make the desired content changes. It seems straightforward, but the real challenge comes when you want to keep your published URL https://website.com open for your main organization's website while keeping https://website.com/wp-admin and https://website.com/wp-login.php protected with authentication.
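One common approach, sketched below for nginx (the IP range, credentials file, and PHP handoff are placeholders; an equivalent can be written for Apache), is to lock down the admin paths at the web server while leaving the public site untouched:

```nginx
# Restrict WordPress admin endpoints to trusted networks plus basic auth.
location ~ ^/(wp-admin|wp-login\.php) {
    allow 203.0.113.0/24;                    # placeholder office range
    deny  all;                               # everyone else is refused
    auth_basic           "Authors only";     # additional credential check
    auth_basic_user_file /etc/nginx/.htpasswd;
    try_files $uri $uri/ /index.php?$args;   # hand off to WordPress
}
```

The trade-off is that authors outside the allowed range (e.g. traveling) are locked out, which is one reason identity-aware access proxies are attractive here.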

Attack of the Killer ROBOT

On December 12th, 2017, researchers Hanno Böck, Juraj Somorovsky, and Craig Young published a paper detailing an attack they called the Return Of Bleichenbacher's Oracle Threat (ROBOT) (https://eprint.iacr.org/2017/1189). This attack, as the name implies, is an extension of an attack published in 1998 (https://link.springer.com/content/pdf/10.1007%2FBFb0055716.pdf) that affects systems using certain implementations of RSA key exchange.

Customers have voiced concerns about this threat and asked how Akamai can help. Customers that use Akamai services are protected from this attack, because Akamai uses OpenSSL on all of our Edge servers, instead of the vulnerable implementation this threat targets. Since RSA key exchange is not used, this attack will fail against the Akamai Edge. An attacker communicates with an Edge server first, so the Akamai network prevents vulnerable origin servers from ever seeing the ROBOT attack. Additionally, customers who use Site Shield are protected from any related scanning and exploitation attempts as all requests will be forced through Akamai's Edge network.

There is one exception: Customers using the Akamai SRIP product should be aware the service proxies messages directly back to the customer's server and does not negotiate the key exchange.  The ROBOT attack traffic would also be proxied in this manner and could result in a successful attack.  Customers using SRIP need to patch vulnerable systems as quickly as their patching and risk mitigation processes allow.

The ROBOT attack works by allowing the attacker to recover the plaintext from chosen ciphertext. In this scenario, the attacker queries the target server with an encrypted message. The server then decrypts the message and responds with 1 if the plaintext starts with 0x0002, or 0 otherwise. By modifying the messages sent, depending on the response from the server, the attacker can, over time, decrypt the ciphertext without obtaining the private key. This attack is part of a family known as chosen-ciphertext attacks.
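The core mechanics can be sketched with textbook-sized RSA numbers (deliberately tiny and insecure, purely to show the yes/no oracle and the RSA malleability the attack exploits; real attacks run millions of adaptive queries against full-size keys):

```python
# Toy illustration of the oracle the ROBOT/Bleichenbacher attack relies on.
# The RSA key below is textbook-tiny and exists only to show (1) the
# "conformant padding" oracle and (2) RSA malleability.

p, q = 61, 53
n = p * q            # modulus: 3233
e = 17               # public exponent
d = 2753             # private exponent (e * d = 1 mod phi(n))

def decrypt(c: int) -> int:
    return pow(c, d, n)

def oracle(c: int) -> bool:
    """Server-side check the attacker abuses: does the decrypted value
    look PKCS#1 v1.5 conformant? (Simplified here to a top-byte test
    standing in for the 0x0002 prefix check.)"""
    return (decrypt(c) >> 8) == 0x02

m = 0x02AB                      # a "conformant" toy plaintext
c = pow(m, e, n)                # its ciphertext
assert oracle(c)                # server says: padding looks valid

# Malleability: multiplying the ciphertext by s^e multiplies the
# plaintext by s. Observing oracle() on many such modified ciphertexts
# lets the attacker narrow down m without ever learning d.
s = 3
c2 = (c * pow(s, e, n)) % n
assert decrypt(c2) == (m * s) % n
```

The full attack iterates this multiply-and-query step, shrinking the interval that must contain the plaintext after each oracle answer.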

In addition to the aforementioned exploit, this attack allows the attacker to sign arbitrary messages with the private RSA key of the server. Using a similar method, the attack treats the attacker's message as though it were eavesdropped ciphertext. Again, the key is not stolen, but the attacker can still use it to sign messages. The researchers point out that this function is time consuming and only works on certain types of implementations.

The most important lesson to be learned from this attack is that current testing is insufficient and allows old vulnerabilities to work against modern TLS implementations. The paper's authors note how alarming it is that they were able to successfully use a 19-year-old attack with only simple modifications. The real solution is to fully deprecate RSA key exchange. While the current TLS 1.3 specification does so, legacy implementations and compatibility requirements will keep this attack and others a useful tool for years to come.

Akamai SIRT

Akamai, Mirai, & The FBI

Through the end of 2016, and throughout 2017, multiple Mirai-based botnets targeted multiple Akamai customers. The very first Mirai attack against Akamai was a multi-day barrage, weighing in at a peak of 620 Gbps, that sent shockwaves across the Internet. The same botnet would go on to conduct several hard-hitting attacks across the Internet and cause widespread outages.

On December 13, 2017, the Department of Justice (DOJ) announced that multiple actors had pled guilty to charges linked to the original Mirai botnet. In this announcement they also listed Akamai and other organizations as a source of "additional assistance".

"Additional assistance was provided by the FBI's New Orleans and Pittsburgh Field Offices, the U.S. Attorney's Office for the Eastern District of Louisiana, the United Kingdom's National Crime Agency, the French General Directorate for Internal Security, the National Cyber-Forensics & Training Alliance, Palo Alto Networks Unit 42, Google, Cloudflare, Coinbase, Flashpoint, Yahoo and Akamai."

Researchers at Akamai have been involved in the dissection and tracking of the Mirai botnet from the very beginning and have been actively working to keep up with the evolution of Mirai and its many variants since. We want to use this opportunity to explain the role Akamai played in the research leading up to FBI's investigations.

In the hours following the initial attacks, researchers from Akamai SIRT, Flashpoint, CloudFlare, Google, Yahoo, Palo Alto Networks, and more, began to take notice and work toward understanding the who, what, why, and how that made attacks of this magnitude possible.  Individuals at these organizations formed an informal working group in order to share the knowledge they were gleaning on the nature of the new threat. 

Malware samples believed to be associated with a new, and mostly unknown, botnet were seen across several honeypots in the wild. This quickly-growing botnet was not only observed infecting honeypots, but was also identified based on its continually growing footprint of scanning and brute-forcing activities.

Researchers at Akamai began analyzing the malware to reverse engineer its network protocols and capabilities. The discoveries we made about communication strategies, command and control protocol structures, attack capabilities, and attack traffic signatures, along with other valuable data, were collected, documented, and ultimately shared to aid collaboration across the working group of researchers and their respective organizations.

These findings and information proved valuable in helping other organizations defend against the Mirai botnet as well as assisting the FBI to understand, correlate, and attribute attacks back to specific botnets and suspected DDoS-for-hire operations.

We at Akamai thank the FBI and DOJ for acknowledging our hard work on the Mirai botnet research, and we appreciate their continued efforts to help victims and organizations combat cybercrime.

Together we can all do our part to help make and keep the Internet "Fast, Reliable, and Secure".

High fives to everyone involved!

 

Akamaizing Your Dev & QA Environments

Over the last few months, I've been talking to many development and test teams who deliver their sites and applications through the Akamai Intelligent Platform. One common challenge they face is how to test their Akamai delivery configurations on the Internet against their private development and QA environments behind the firewall. Most operate on a DevOps model with the goal of performing end-to-end testing throughout the software development lifecycle in order to find bugs and interoperability issues (e.g., misconfigured headers) earlier in the development process. As Ron Patton notes in "Software Testing", the cost of finding a bug increases logarithmically as the development process progresses, so finding these issues early in the process saves a lot of time and money. The historical challenge these teams have faced has been how to give the Akamai delivery configuration access to these development and QA environments. Because those environments are typically private and not exposed to the internet, the common approach has required moving them into the DMZ.