
Is Your Infrastructure Thinking Too Hard? (Part 1)

Akamai's Enterprise Architects regularly perform site assessments to help customers maximize the speed, scalability and reliability of both their Akamai setup and their origin infrastructure.

The first step in this process is what we call discovery, where we dig deep into the customer's web architecture and website. This allows us to understand the whole story of how the website functions, along with business goals and design decisions.

Out of all of the questions and answers during discovery, there's one particular question that consistently generates a large amount of conversation and high-value ideas. In fact, this question can be the launching point for a number of critical operational exercises like designing a new application, migrating to a new platform, troubleshooting or simply auditing: "How much of your infrastructure do you use when you serve a request?"

That is, how many of your databases, application servers, internal caches, load balancers and front-end servers have to spend time thinking about how to output the requested web page?

The answer varies depending on the request, but the important metric is "origin think time" or how long the origin (your servers) must think about a request once the request is made. If the page is cacheable and can be served directly by (say) Akamai at the edge of the network, the think time will be slim to nil. If it's not cacheable, like dynamic or uniquely-personalized content, your servers will spend a measurable amount of time processing the request.   
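
Before doing the full exercise below, you can get a rough, directional feel for this by comparing time-to-first-byte across a few page types: edge-cacheable pages should come back quickly and consistently, while dynamic pages carry the origin's think time on top of network latency. Here's a minimal sketch in Python, assuming the requests library is available; the URLs and labels are hypothetical placeholders, and the timings include network latency as well as think time, so treat the numbers as directional rather than precise.

# Rough sketch: compare average time-to-first-byte (TTFB) for a handful of
# page types. URLs and labels are hypothetical placeholders -- substitute
# your own. These timings include network latency as well as origin think time.
import requests

PAGES = {
    "home (expected edge-cacheable)":    "https://www.example.com/",
    "product (expected edge-cacheable)": "https://www.example.com/product/123",
    "cart (dynamic)":                    "https://www.example.com/cart",
    "account (personalized)":            "https://www.example.com/account",
}

SAMPLES = 5  # average several requests to smooth out noise

for label, url in PAGES.items():
    timings = []
    for _ in range(SAMPLES):
        # response.elapsed measures the time from sending the request until
        # the response headers arrive -- a reasonable TTFB proxy.
        resp = requests.get(url, timeout=30)
        timings.append(resp.elapsed.total_seconds())
    avg = sum(timings) / len(timings)
    print(f"{label:35s} avg TTFB over {SAMPLES} requests: {avg:.3f}s")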

I recommend sitting down with a diagram of your web infrastructure and a spreadsheet, and asking the following questions:

•    What are the typical types of requests for non-cacheable data that browsers make to my website? In many cases, a typical site has only 5-6 general types of web pages.
•    For each of those types, which of my servers/processes (and how many) need to process that request? For example: 1 database server, 1 app server, 1 auth process (on the app server).
•    How much wall-clock time does it take for my servers to process the request? This is typically measured in seconds; get numbers for both cold and warm browser caches, averaging over multiple requests.
•    If you have the metrics, include a column for how many times a user might access that type of page during a typical session. (The sketch after this list shows one way to lay out these columns.)
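
To make the spreadsheet concrete, here's a minimal sketch of how those columns might fit together before you transfer them to your own chart. Every page type, component list and number below is a hypothetical placeholder; replace them with measurements from your own infrastructure diagram and logs. The last column (warm think time multiplied by visits per session) is one simple way to see which page type consumes the most origin time over a typical visit.

# A minimal sketch of a think-time chart. Every page type, component list,
# and number below is a hypothetical placeholder -- substitute measurements
# from your own infrastructure diagram and logs.
from dataclasses import dataclass

@dataclass
class PageType:
    name: str
    components: list[str]    # servers/processes that must think about the request
    cold_think_s: float      # avg origin think time with cold caches, in seconds
    warm_think_s: float      # avg origin think time with warm caches, in seconds
    visits_per_session: int  # how often a user hits this page type per session

PAGE_TYPES = [
    PageType("home page",     ["front-end"],                            0.4, 0.1, 1),
    PageType("category page", ["front-end", "app server"],              0.9, 0.3, 3),
    PageType("product page",  ["front-end", "app server", "database"],  1.2, 0.4, 5),
    PageType("cart",          ["front-end", "app server", "database"],  1.5, 0.8, 2),
    PageType("checkout",      ["front-end", "app server", "database",
                               "auth process"],                         2.5, 1.9, 1),
]

print(f"{'Page type':15s} {'Servers/procs':>13s} {'Cold (s)':>9s} {'Warm (s)':>9s} "
      f"{'Visits':>7s} {'Warm s/session':>15s}")
for p in PAGE_TYPES:
    # Origin time this page type costs over a typical session (warm caches).
    per_session = p.warm_think_s * p.visits_per_session
    print(f"{p.name:15s} {len(p.components):13d} {p.cold_think_s:9.1f} {p.warm_think_s:9.1f} "
          f"{p.visits_per_session:7d} {per_session:15.1f}")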

Here's an example of what a typical e-commerce site's think-time chart might look like:

[Image: example think-time chart for a typical e-commerce site]

This chart is a good first step in identifying hot spots in your infrastructure, and it will give you the insight you need to start triaging think-time issues.

In Part 2 of this article, I'll talk about some approaches you can use in your own web infrastructure to reduce origin think time.  

Matt Ringel is an Enterprise Architect on Akamai's Professional Services team.
