Before we dive straight into the magical unicorn from heaven that is serverless computing embedded within the CDN edge (a direct customer quote that I want on a team T-shirt soon), let's first level-set on some basic concepts of computing. In the context of web experiences, IoT device messaging, and really anything else that travels across the public internet, compute can happen in three venues:
- On-premises (your data center)
- Centralized compute clouds (pick your favorite public cloud)
- Distributed edge clouds (pick your favorite CDN -- ahem, Akamai!)
Each of these venues has distinct variables that content owners should consider carefully, including:
- Capital expenditures (CapEx)
- Server operations and management
- Geographic footprint
- Proximity of your compute venue to your end users' devices
Which combination is right for your application and user experience needs? When we look at the proximity of these compute venues to your devices and end users, it becomes clear that pulling business logic and contextual data forward would be highly desirable too (just as you have done with content for years).
It would be great if we could just focus on user experience, IoT message transactions, or data transformation functions without having to worry about managing all these other concerns.
Thankfully, this is exactly what serverless computing solves!
What is serverless computing?
Serverless computing -- also known as function as a service (FaaS) -- is a zero-management computing environment that allows developers to deploy and execute event-driven logic, along with its contextual data, without having to manage and maintain the underlying infrastructure.
Serverless environments typically exist within centralized compute clouds or edge clouds and offer pricing models based on the resources that applications actually consume.
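The FaaS model described above can be sketched as a single stateless function that the platform invokes once per event. This is a minimal illustration, not any specific vendor's API; the `event` shape and `handler` name are hypothetical.

```javascript
// A minimal sketch of the FaaS model: a stateless function invoked per
// event, billed only for the resources it actually consumes. The event
// shape below is hypothetical, not a real vendor API.
function handler(event) {
  // Business logic only -- no servers, scaling, or OS patching to manage.
  const name = (event.queryParams && event.queryParams.name) || 'world';
  return {
    status: 200,
    body: `Hello, ${name}!`,
  };
}

// Simulate the platform invoking the function on an incoming event.
const response = handler({ queryParams: { name: 'edge' } });
console.log(response.status, response.body);
```

The key property is that the function holds no server state of its own: the platform can spin up as many copies as incoming events require.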
What is serverless good for?
The key benefits of serverless include:
- Eliminating infrastructure maintenance tasks and shifting operational responsibilities to a cloud or edge vendor
- Autoscaling to avoid the need to build out extra capacity in advance
- Freeing up developers so they can focus on building and running great applications and services without thinking about servers
This does sound like a magical unicorn, doesn't it? Let's look a bit deeper at some of the benefits and challenges of serverless.
Serverless offers a lot of awesome benefits for developers, essentially freeing them up to focus on the digital experience's key features, core capabilities, and secret sauce. Additionally, serverless environments bring with them scale, reliability, and cost efficiency -- you only pay for what you use.
To harness all this goodness, serverless environments traditionally provide a compute framework with programming language support, a read/write data store, and developer tools that assist in code management, activation, and monitoring.
The fine print
Unfortunately, just like everything good in life, there is a bit of fine print -- not every vendor's serverless environment is actually that magical.
Going back to our three compute venues above, serverless can live in either a centralized cloud or an edge cloud. So how do you know which is the right option for your logic?
If you are a performance fan like I am and have latency-sensitive workloads -- regardless of where in the world your users or devices are -- serverless at a centralized cloud provider may not be what you are looking for. Serverless in a centralized cloud suffers from slow cold starts (that is, the time it takes to "boot up" your code), exists only in specific "availability regions" sprinkled across the globe, and often stores data very far from both your function and the user's device. Not great.
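The cold-start penalty is easy to see in a toy model: the first invocation pays a one-time initialization cost before any business logic runs, while subsequent "warm" invocations skip it. The millisecond figures below are invented for illustration only.

```javascript
// Toy model of cold vs. warm invocations. The costs are invented
// numbers, not measurements of any real platform.
let initialized = false;

function invoke(event) {
  let coldStartMs = 0;
  if (!initialized) {
    coldStartMs = 800; // e.g. loading the runtime and your code
    initialized = true;
  }
  const workMs = 5; // the actual business logic
  return { totalMs: coldStartMs + workMs };
}

console.log(invoke({}).totalMs); // cold start: 805
console.log(invoke({}).totalMs); // warm: 5
```

For a latency-sensitive workload, that first-request penalty lands directly on a real user, which is why where (and how often) your function starts cold matters so much.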
The ideal solution
However, all is not lost. The edge cloud is the ideal solution to remedy all of these challenges.
A serverless environment that is embedded within a CDN's footprint provides all the goodness developers expect, including:
- Scale and Reliability: Provides a single global network that protects against network congestion and offers reliability and scale with no associated management overhead
- Distributed Compute: Globally distributed edge nodes allow you to execute code closer to the device and end user, resulting in decreased latency, decreased bandwidth, and increased origin offload
- Data Processed Where It's Created: Business logic runs right at the edge, closer to the device, processing data in place to support latency-sensitive workloads and compliance regulations
- Global Regions: Eliminates the need to manage multiple serverless "availability regions" because every edge node is a serverless compute node
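To make "data processed where it's created" concrete, here is a small sketch of an edge function that aggregates raw IoT sensor readings locally and forwards only a compact summary upstream. The function and field names are hypothetical; the point is the pattern: less bandwidth, lower latency, and more origin offload.

```javascript
// Sketch: aggregate IoT readings at the edge node where they are
// created, so only a small summary travels upstream. Names are
// hypothetical, not a real platform API.
function summarizeAtEdge(readings) {
  // Process locally instead of shipping every raw reading to a
  // distant centralized region.
  const sum = readings.reduce((acc, r) => acc + r.tempC, 0);
  return {
    count: readings.length,
    avgTempC: sum / readings.length,
  };
}

const raw = [{ tempC: 20 }, { tempC: 22 }, { tempC: 24 }];
console.log(summarizeAtEdge(raw)); // one small object leaves the edge
```

Three readings in, one object out -- and because every edge node can run this logic, the aggregation happens near the device rather than in a far-away region.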
I often hear, "Isn't serverless computing just edge computing in disguise?"
In some ways, yes. At Akamai, we've spent the past 20 years building function-as-a-service capabilities that have deployed dynamic content assembly, security protections, bot management, and much more to the edge, closer to end users' devices. Customers have been configuring and deploying these functions as part of their content delivery services for many years.
But perhaps not ...
As back-end modernization decomposes monolithic applications into microservices and functions, as the proliferation of IoT devices in our daily lives drives low-latency data processing use cases, and as everything in the tech stack comes to be treated as "just more code," we need to offer you, the developer, another choice to continue innovating on the edge for the next 20 years -- a serverless platform where your code, your data, and your automation can live in harmony.
Akamai's serverless compute platform, EdgeWorkers + DevTools, is the intersection of serverless computing and content delivery, providing the best of both worlds: performance and productivity.
I don't know about you, but I'm ready to take this magical unicorn from heaven for a spin. To learn more about how you can get started with Akamai's serverless platform, or to investigate all of the features deployable to the Akamai Intelligent Edge Platform, check us out at developer.akamai.com.