WordPress started as just a blogging system, but has evolved into a full content management system, and much more, through thousands of plugins, widgets, and themes. One of the main challenges I have seen with customers is providing content authors secure access to /wp-admin or /wp-login.php so that they can make the desired content changes. It seems straightforward, but the real challenge comes when you want to keep your published URL https://website.com for your organization's main website while protecting https://website.com/wp-admin and https://website.com/wp-login.php with authentication.
December 2017 Archives
On Dec 12th, 2017, researchers Hanno Böck, Juraj Somorovsky and Craig Young published a paper detailing an attack they called the Return Of Bleichenbacher's Oracle Threat (ROBOT)(https://eprint.iacr.org/2017/1189). This attack, as the name implies, is an extension of an attack published in 1998 (https://link.springer.com/content/pdf/10.1007%2FBFb0055716.pdf) that affects systems using certain implementations of RSA key exchange.
Customers have voiced concerns about this threat and asked how Akamai can help. Customers that use Akamai services are protected from this attack, because Akamai uses OpenSSL on all of our Edge servers, instead of the vulnerable implementation this threat targets. Since RSA key exchange is not used, this attack will fail against the Akamai Edge. An attacker communicates with an Edge server first, so the Akamai network prevents vulnerable origin servers from ever seeing the ROBOT attack. Additionally, customers who use Site Shield are protected from any related scanning and exploitation attempts as all requests will be forced through Akamai's Edge network.
There is one exception: customers using the Akamai SRIP product should be aware that the service proxies messages directly back to the customer's server and does not negotiate the key exchange. ROBOT attack traffic would be proxied in this manner and could result in a successful attack. Customers using SRIP need to patch vulnerable systems as quickly as their patching and risk-mitigation processes allow.
The ROBOT attack allows the attacker to recover the plaintext from chosen ciphertext. In this scenario, the attacker queries the target server with an encrypted message. The server decrypts the message and responds with 1 if the plaintext starts with 0x0002, or 0 otherwise. By modifying the messages sent, based on the server's responses, the attacker can, over time, decrypt the ciphertext without ever obtaining the private key. This attack belongs to a family known as chosen-ciphertext attacks.
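The oracle loop described above can be sketched in Python. Everything here is a toy invented for illustration: the key is tiny and insecure, the padding bytes are fixed, and the "server" is a local function. A real attack targets 2048-bit RSA over TLS and runs the full interval-narrowing algorithm from Bleichenbacher's 1998 paper; this sketch shows only the core leak, the yes/no padding check combined with RSA's multiplicative malleability.

```python
# Toy Bleichenbacher-style padding oracle (illustrative only, not a real attack).

# Tiny textbook-RSA key (hypothetical values; insecure, for demonstration).
p = 2147483647            # 2^31 - 1
q = 2305843009213693951   # 2^61 - 1
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
k = (n.bit_length() + 7) // 8       # modulus length in bytes

def pkcs1_v15_pad(msg: bytes) -> int:
    # PKCS#1 v1.5 layout: 0x00 0x02 | nonzero padding | 0x00 | message
    pad = b"\x11" * (k - 3 - len(msg))  # fixed bytes for reproducibility
    return int.from_bytes(b"\x00\x02" + pad + b"\x00" + msg, "big")

def oracle(c: int) -> bool:
    """The server-side leak: does the decrypted value start with 0x00 0x02?"""
    m = pow(c, d, n).to_bytes(k, "big")
    return m[:2] == b"\x00\x02"

# A ciphertext the attacker has captured.
c = pow(pkcs1_v15_pad(b"A"), e, n)

# RSA is multiplicative: Enc(m) * s^e mod n = Enc(m * s mod n). The attacker
# submits blinded ciphertexts and learns from each yes/no answer whether
# m * s mod n falls in the narrow "starts with 0x0002" range; each hit
# shrinks the set of plaintexts m could be.
conforming = [s for s in range(1, 5000)
              if oracle((c * pow(s, e, n)) % n)]
print(conforming)   # s = 1 (the unmodified ciphertext) always conforms
```

With a full-size modulus, millions of such queries narrow the candidate interval down to the single plaintext, which is why the fix is to remove RSA key exchange rather than to rate-limit the oracle.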
In addition to the decryption exploit described above, this attack allows the attacker to sign arbitrary messages with the server's private RSA key. Using a similar method, the attack treats the attacker's message as though it were eavesdropped ciphertext. Again, the key is not stolen, but the attacker can still use it to sign messages. The researchers point out that this operation is time consuming and only works against certain types of implementations.
The most important lesson to be learned from this attack is that current testing is insufficient and allows old vulnerabilities to work against modern TLS implementations. The paper's authors note how alarming it is that they were able to successfully use a 19-year-old attack with only simple modifications. The real solution is to fully deprecate RSA key exchange. While the current TLS 1.3 specification does so, legacy implementations and compatibility requirements will keep this attack, and others like it, useful tools for years to come.
Through the end of 2016 and throughout 2017, multiple Mirai-based botnets targeted Akamai customers. The very first Mirai attack against Akamai was a multi-day barrage, weighing in at a peak of 620 Gbps, that sent shockwaves across the Internet. The same botnet would go on to conduct several hard-hitting attacks across the Internet and cause widespread outages.
On December 13, 2017, the Department of Justice (DOJ) announced that multiple actors had pled guilty to charges linked to the original Mirai botnet. In the announcement, the DOJ also listed Akamai and other organizations as sources of "additional assistance".
"Additional assistance was provided by the FBI's New Orleans and Pittsburgh Field Offices, the U.S. Attorney's Office for the Eastern District of Louisiana, the United Kingdom's National Crime Agency, the French General Directorate for Internal Security, the National Cyber-Forensics & Training Alliance, Palo Alto Networks Unit 42, Google, Cloudflare, Coinbase, Flashpoint, Yahoo and Akamai."
Researchers at Akamai have been involved in the dissection and tracking of the Mirai botnet from the very beginning and have been actively working to keep up with the evolution of Mirai and its many variants since. We want to use this opportunity to explain the role Akamai played in the research leading up to FBI's investigations.
In the hours following the initial attacks, researchers from Akamai SIRT, Flashpoint, Cloudflare, Google, Yahoo, Palo Alto Networks, and others began to take notice and work toward understanding the who, what, why, and how that made attacks of this magnitude possible. Individuals at these organizations formed an informal working group to share the knowledge they were gleaning about the nature of the new threat.
Malware samples believed to be associated with a new, and mostly unknown, botnet were seen across several honeypots in the wild. This quickly-growing botnet was not only observed infecting honeypots, but was also identified based on its continually growing footprint of scanning and brute-forcing activities.
Researchers at Akamai began analyzing the malware to reverse engineer its network protocols and capabilities. Our discoveries about communication strategies, command and control protocol structures, attack capabilities, attack traffic signatures, and other valuable data were collected, documented, and ultimately shared to aid collaboration across the working group of researchers and their respective organizations.
These findings proved valuable in helping other organizations defend against the Mirai botnet, as well as in assisting the FBI in understanding, correlating, and attributing attacks back to specific botnets and suspected DDoS-for-hire operations.
We at Akamai thank the FBI and DOJ for acknowledging our hard work on the Mirai botnet research, and for their continued efforts to help victims and organizations combat cybercrime.
Together we can all do our part to help make and keep the Internet "Fast, Reliable, and Secure".
High fives to everyone involved!
Over the last few months, I've been talking to many development and test teams who deliver their sites and applications through the Akamai Intelligent Platform. One common challenge they face is how to test their Akamai delivery configurations on the Internet against their private development and QA environments behind the firewall. Most operate on a DevOps model with the goal of performing end-to-end testing throughout the software development lifecycle in order to find bugs and interoperability issues (e.g. misconfigured headers) earlier in the development process. As noted by Ron Patton in "Testing Software", the cost of finding a bug increases logarithmically as the development process progresses, so finding these issues early saves a lot of time and money. The historical challenge these teams have faced has been how to give the Akamai delivery configuration access to these development and QA environments. Because these environments are typically private and not exposed to the Internet, the common approach has required moving them into the DMZ.
The IETF held its 100th meeting the week of November 13 in Singapore. I want to report on two pieces of good news.
The results are in: Black Friday and Cyber Monday broke all records in 2017, with total revenue for those days exceeding $11.5 billion. Anticipating that more consumers would shop online, retailers invested in digital experiences and geared up for the holidays by (i) stocking fewer items in stores to reduce inventory costs and (ii) hiring fewer seasonal workers. Retailers' predictions were accurate, and their investment in digital experiences paid off, as close to 40% of Black Friday revenue was generated via mobile devices.
We at Akamai typically see a huge surge in traffic on our platform on Black Friday and Cyber Monday, and this year was no exception. Using our mPulse technology to capture real user data and correlate web and mobile performance to user behavior, we observed an overall global increase in mobile device conversion rates in 2017. Our data highlights that retailers have understood and implemented strategies to improve the digital experience for their users, and that those investments are paying off, especially on mobile devices. Here are the key trends we observed on our platform that contributed to a successful holiday season: