Your cloud-hosted workloads are your "main event." They are your commerce website, your CRM application, your collaboration tools, your partner portal, your corporate website, and more - the engines that drive your business and enable your organization to connect, collaborate, and transact across a broad group of employees, customers, partners, and suppliers. Unlike the making of a motion picture, where you can rehearse and do as many re-takes as needed, your cloud-hosted workloads need to be up and running 24 x 7 x 365. Downtime is not an option for your end users. When operating in the cloud, this is easier said than done: the availability of the underlying IaaS/PaaS services is largely out of your control. So how do you ensure your workloads stay online, even when your cloud provider goes down?
Recently in CDN Category
Do you remember playing capture the flag as a kid? I sure do! My friends and I would split up into even teams - usually about six kids per team. Each team would then hide its precious flag on its side of the backyard. The important "strategery" came into play once the game actually began: I found the winning formula was to make sure my teammates all spread out and stayed as far away from each other as possible. That way, we avoided getting caught when we entered enemy territory (i.e., the other side of the backyard). Some teams would stick together in groups of 2-3 to chit-chat while they snuck around the backyard, making them an easy target! The principle here is a familiar one: don't put all your eggs in one basket. Well, cloud computing has created the largest IT egg basket the world has ever known. The aggregation of thousands upon thousands of workloads into a few very large data centers has made these workloads sitting ducks for attackers. Just like my chatty friends, cloud-hosted workloads are easy to find and much more vulnerable than in the old days, when each company kept its workloads in its own private data center. I'm certainly not proposing that we go back to the old days of IT, but it's important to consider how we can combine the cost and efficiency benefits of the cloud without expanding our attack surface and introducing new vulnerabilities into our security perimeter.
In the words of the great Peter Drucker, "If you can't measure it, you can't manage it." This is especially true when it comes to managing performance in the cloud. Most organizations rely on the standard performance monitoring tools offered by their cloud provider, which provide only basic insights into the health of the infrastructure within the data center. But what about what's happening outside the data center? Do you have complete end-to-end visibility into all your actual users and the performance they are experiencing? And how do you then optimize the experience for your end users?
Or is it?! Are you burnt out on hearing from vendors who have no new or interesting perspectives to share on the seemingly overplayed topic of cloud? As a former IT manager myself, I feel you. That's why I sought out an expert, Jason Fuller, to share his insights and best practices for designing cloud architectures. I wanted an unbiased perspective from a guy who has the battle scars to show for his many ups and downs as an IT executive managing mission-critical cloud environments for large global enterprises.
Significant technology and usage changes have emerged since the initial publication of the practical guide to web resource caching. This updated edition revisits the recommendations issued earlier and brings new focus on:
- Fast Purge, the ability to invalidate or delete assets from the Akamai network in five seconds, and
- API usage, prevalent in native mobile and single page applications.
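To make the Fast Purge flow concrete, here is a minimal sketch of how a purge request could be assembled. The `/ccu/v3/...` endpoint path and the `{"objects": [...]}` payload shape reflect Akamai's Fast Purge (CCU v3) API as I understand it, but the helper function, its name, and the example URL are hypothetical illustrations, not taken from the guide; a real call would also need EdgeGrid authentication.

```python
import json

# Hypothetical helper: assembles the endpoint path and JSON body for a
# Fast Purge (CCU v3) call. "invalidate" marks cached copies stale so the
# edge revalidates with origin; "delete" removes them outright.
def build_purge_request(urls, action="invalidate", network="production"):
    """Return the (path, body) pair for a Fast Purge API call."""
    if action not in ("invalidate", "delete"):
        raise ValueError("action must be 'invalidate' or 'delete'")
    if network not in ("staging", "production"):
        raise ValueError("network must be 'staging' or 'production'")
    path = f"/ccu/v3/{action}/url/{network}"
    body = json.dumps({"objects": list(urls)})
    return path, body

path, body = build_purge_request(["https://www.example.com/style.css"])
print(path)   # /ccu/v3/invalidate/url/production
print(body)   # {"objects": ["https://www.example.com/style.css"]}
```

The resulting request would be POSTed to the Akamai API host with signed EdgeGrid headers; "invalidate" is usually the safer default, since a "delete" forces a full refetch from origin.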
One of the most frustrating experiences online is waiting for a page to load, or trying to complete a transaction for that "must have" item and being greeted with an unresponsive screen. In fact, Akamai's 2015 Performance Matters report found that 49% of consumers expect a page to load in two seconds or less. As consumers' expectations for page load speed increase, their patience for slow-loading websites decreases. Currently, only 51% of consumers "wait patiently" for a website to load, compared to 63% five years ago.
This blog post is part of an ongoing series where we will discuss a wide range of H2-related topics. In today's post, we talk about some of the misconceptions regarding HTTP/2 being a silver bullet for improved website performance.
It has now been five years since World IPv6 Day and four years since World IPv6 Launch. The long-term global Internet transition to IPv6 is well underway and increasingly entering the mainstream. The American Registry for Internet Numbers (ARIN) exhausted its free pool of IPv4 addresses in September 2015, following all of the other registries except for Africa's AFRINIC (which is on track to exhaust its IPv4 free pool in 2018). The result is that businesses and service providers needing Internet addresses for their mobile users, broadband users, business offices, servers, or cloud infrastructure now need to purchase IPv4 addresses on a transfer market, use IPv4 NAT (network address translation) with corresponding costs and complexity, or make a strategic decision to leverage IPv6.
Apple's upcoming App Store submission requirement around supporting IPv6-only environments (announced last year at WWDC and being enforced starting June 1) has been getting plenty of recent coverage. iOS application developers already need to make sure their applications work in IPv6-only environments with NAT64+DNS64; however, this by itself does not mean that those applications (or web-based applications) obtain content over native IPv6.
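One quick way to see the distinction between "works under NAT64/DNS64" and "served over native IPv6" is to check whether a hostname actually resolves to IPv6 addresses. The helper below is a hypothetical sketch using Python's standard `socket.getaddrinfo`; the function name is my own, and a real audit would also check AAAA records for every domain the application contacts.

```python
import socket

def native_ipv6_addresses(host):
    """Return the IPv6 addresses `host` resolves to.

    An empty list suggests that clients on IPv6-only networks would reach
    this host through NAT64/DNS64 translation rather than native IPv6.
    """
    try:
        infos = socket.getaddrinfo(host, None, socket.AF_INET6)
    except socket.gaierror:
        return []
    # Each entry is (family, type, proto, canonname, sockaddr); the
    # address itself is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

# The IPv6 loopback address resolves without any DNS lookup:
print(native_ipv6_addresses("::1"))  # ['::1']
```

Running this against your own hostnames shows whether content is reachable over native IPv6 or only through translation at the network edge.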
It was 2:30 AM on March 13th when the customer called everyone on the Akamai account team to announce they were under attack. The attacker had been locking inventory on their site for hours, causing a significant burst in traffic and preventing customers from completing transactions. The Akamai Security Operations Center was engaged right away and quickly discovered that a bot was behind the attack. It was a "good bot" merely scraping the inventory for pricing data, but it caused havoc for both the infrastructure and the business.