Akamai Diversity

April 2012 Archives

In the first part of this blog I examined the relationship of performance, user experience, and the competitive marketplace in financial services.  In this second part I will look at the ROI.  For Financial Services Institutions (FSIs), what is the payback for improving the performance of their websites, mobile sites, and apps?

Part Two: The ROI of Performance

There are many reasons why ROI is difficult to calculate.  I have dealt with this challenge as it relates to performance, user experience, and technology adoption for over 20 years, and across many segments of financial services.  The best way to deal with it is to dive right into an example, and then consider how to extend the example to other segments.

In Part One I showed how the brokerage segment is the most keen on performance.  It also provides some very good metrics from which to build an ROI case.

The first step is to identify the elements for our ROI case.  You can categorize these elements as income-producing elements and expense elements.  Personally, I prefer to first examine top-line elements, which grow the business and can make the biggest impact for a firm.  In brokerage, the common elements I work with are:

1. Daily Average Revenue Trades (DARTs)
2. Account Openings
3. Call Center Reduction
4. International Users
5. DDoS and intrusion mitigation
6. Ad spending
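
To make the math concrete, here is a minimal sketch of how the first three elements on that list can be rolled up into a single ROI figure. Every number in it is a placeholder assumption chosen for illustration, not Akamai data or a brokerage benchmark, and a real model would fold in the remaining elements the same way.

```python
# Illustrative only: a toy ROI model for a performance project at a brokerage.
# Every figure below is a placeholder assumption, not real Akamai or brokerage data.

def annual_benefit(darts_per_day, revenue_per_trade, dart_uplift_pct,
                   new_accounts, value_per_account, account_uplift_pct,
                   call_center_cost, call_deflection_pct,
                   trading_days=250):
    """Sum the top-line and expense-side benefits attributed to faster pages."""
    dart_gain = darts_per_day * trading_days * revenue_per_trade * dart_uplift_pct
    account_gain = new_accounts * value_per_account * account_uplift_pct
    call_savings = call_center_cost * call_deflection_pct
    return dart_gain + account_gain + call_savings

def roi(benefit, cost):
    """Simple ROI: net benefit over cost."""
    return (benefit - cost) / cost

if __name__ == "__main__":
    benefit = annual_benefit(
        darts_per_day=300_000, revenue_per_trade=11.00, dart_uplift_pct=0.005,
        new_accounts=100_000, value_per_account=300.00, account_uplift_pct=0.02,
        call_center_cost=20_000_000, call_deflection_pct=0.01,
    )
    cost = 1_500_000  # hypothetical annual cost of the performance program
    print(f"Estimated annual benefit: ${benefit:,.0f}")
    print(f"ROI: {roi(benefit, cost):.1%}")
```

The point of the exercise is less the specific numbers than the structure: each element gets its own defensible uplift assumption, and the top-line items usually dwarf the expense savings.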

The 4th Quarter, 2011 issue of the State of the Internet report marks the completion of the fourth year of the report's publication.  The report has come quite a long way since Akamai CMO Brad Rinklin walked me through some ideas for it in a PowerPoint deck back in 2008.  Customers and partners, as well as media and analysts, had been coming to Akamai seeking a host of data about Internet use so they could ensure that they were appropriately preparing for potential challenges and identifying opportunities for their online businesses. From our unique vantage point of delivering over 1 trillion Internet transactions daily, we have unparalleled access to just that kind of information, so we decided it would be a valuable service to the industry to bring it all together in a publicly available report. And so, in spring of 2008, the State of the Internet report was born.

Digging back into the archives, I found that the report's first issue had just 16 pages of content, while this most recent issue has over 50!  Over its history, the report has dropped some sections (such as per-capita data) while adding others (such as insight into SSL ciphers).  Akamai's partner Ericsson has also been a key contributor over the last year, providing its unique insight into the usage characteristics of mobile device users around the world.

The State of the Internet Report Turns Four

Four years ago, when Akamai launched the first State of the Internet Report, the iPhone was brand new, there was no iPad or tablet market, and huge swaths of the globe still lacked basic Internet and mobile access.
 
Internet connectivity has changed drastically since then due to technological advances, and this has revolutionized the ways in which enterprises deliver content and consumers expect to consume it. Nowhere is this change more apparent than in the growth in the number of unique IP addresses connecting to the Akamai Intelligent Platform, which climbed from 329 million in the first quarter of 2008 to more than 628 million, from 236 countries and regions, in the fourth quarter of 2011.

The London Olympic Games are just around the corner. Akamai caught up with Brian Goldfarb of Microsoft Windows Azure Services to talk about the massive change since the last Olympic Games, device proliferation, and consumer expectations.

DASHing Into an Era of Convergence

San Francisco has a largely unknown place in the history of television. Back in 1927, on Green Street in the city, Philo Farnsworth patented a method for showing moving pictures wirelessly. As a lone inventor, he was up against RCA, Westinghouse and Marconi. Each TV broadcaster at the time required a custom TV set to receive its signals. If you wanted to watch certain channels, you had to buy a set compatible with just those channels.

Skip forward ten years. Farnsworth prevailed in a decade-long legal battle with RCA but was never able to capitalize on his remarkable inventions (television being just one of the more than 300 patents issued to him). The broadcast signals were still incompatible. Reason finally prevailed in 1941 with the establishment of the NTSC standard, which harmonized all the broadcast formats of the time. NTSC was the foundation on which America's broadcasting industry and the behemoths of ABC, CBS, and NBC were built.

Today, with streaming media, we find ourselves back in 1927. There are three main adaptive segmented formats - Apple's HLS, Microsoft's Smooth Streaming and Adobe's HTTP Dynamic Streaming (HDS). They are 80% the same, yet 100% incompatible. To view HLS, you must have a player for that format; for HDS, another player; and for Smooth Streaming, a third.  This fractured delivery space forces encoders, delivery networks and client players to spread their development efforts across all these formats, forgoing optimizations that could be achieved by converging around a single format.
 
There is now a new streaming format on the block - MPEG-DASH. "Not another format," you moan, "won't that make things worse?" Perhaps not. DASH is different. Rather than being the proprietary solution of any one company, it is an international ISO standard, compiled by the Moving Picture Experts Group (the same people who brought you MPEG-2 and MP4) and ratified as ISO/IEC 23009-1. Its goal, to continue our story, is to be the NTSC of the streaming world and to foster the same growth in the video-over-IP industry that we saw in the broadcast world.
 
Here at NAB 2012, there is a good amount of chatter about DASH. Will spoke with Andy Plesser of BeetTV about DASH - watch the video here:
The purpose of DASH, which stands for Dynamic Adaptive Streaming over HTTP, is to provide a format that simplifies and converges the delivery of IP video. As it gains wider adoption over the coming years, it will improve client and network interoperability, enable content providers to spend less time and money on backend compatibility and more on compelling content, support common encryption, and allow streaming content to adapt to network and client health.
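
To make "adapt to network and client health" concrete, here is a minimal sketch of the kind of rate-adaptation heuristic a DASH player might run. The standard defines the manifest (MPD) and segment format and leaves the adaptation logic to the client, so the bitrate ladder, safety margin, and throughput samples below are illustrative assumptions rather than anything prescribed by the spec.

```python
# Illustrative sketch of client-side bitrate adaptation, the behavior DASH enables.
# DASH standardizes the manifest and segments, not this heuristic; the ladder and
# safety margin are made-up values for illustration.

BITRATE_LADDER_KBPS = [350, 800, 1500, 3000, 6000]  # hypothetical renditions

def pick_rendition(measured_throughput_kbps: float, safety_margin: float = 0.8) -> int:
    """Choose the highest rendition whose bitrate fits within the measured
    throughput, discounted by a safety margin to absorb network jitter."""
    budget = measured_throughput_kbps * safety_margin
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

# Example: throughput samples taken while downloading previous segments.
for sample in (5200, 2100, 900, 4000):
    print(f"throughput {sample} kbps -> request {pick_rendition(sample)} kbps rendition")
```

With a single manifest and segment format, this same client logic can run unchanged on a phone, a tablet, or a connected TV, which is exactly the convergence argument.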
 
From my perspective, one of the exciting elements of DASH is its promise of convergence - particularly in this era of hyper-connectivity. Consider the range of devices in use today: PCs, TVs, laptops, set-top boxes, game consoles, tablets and mobile phones. To deliver content and features to these devices, each feature has to be built for each type of device, depending on which format it supports. Multiply that by all of the members of the content ecosystem and all of the potential features, and you can imagine the impact this matrix of inefficiency has on our industry. With DASH, you have a single format that can be supported across a common ecosystem of content and services, all the way from the encoder down the chain to the end consumer. The time and cost savings it presents will inevitably translate into an industry with a deeper feature set and a steeper innovation curve.
 
So while it's too early to tell if DASH will succeed in its goals, we at Akamai are excited about its promise for the industry. As members of the MPEG-DASH Promoter's Group (http://dashpg.com), we'll continue to push for its broad adoption.
 
If you're at NAB, come see me to talk more about DASH. I'll be at the Akamai booth (#SL8124), where we'll be showing live demos (see one now if you like: http://tinyurl.com/dash4you) of how DASH works over the general Internet. Faster forward!
 
Will Law is Principal Architect at Akamai.
We caught up with Will Richmond of VideoNuze yesterday at NAB to talk about the latest innovation around new business models for the broadcast industry. Will talks opportunities, challenges, and what consumers want in terms of accessing their entertainment libraries. Hear from Will in the video below!

One year ago, at NAB 2011, TV Everywhere was talked about... well... everywhere. Operators and programmers were racing to secure their strategies to stay relevant in the Media 3.0 world and take advantage of new business models. But after a few years of vigorous discussion, months went by with stalled progress, leaving many to wonder what happened.

Operators want to bundle content for their throngs of paying subscribers, and programmers want to maintain their own destination channels to increase brand recognition amongst consumers. No one wants to give in first. So the compromises landed somewhere in the middle, leaving no one entirely pleased.

As it stands, operators and programmers both share in the revenue of Internet video delivery, but neither party truly owns the whole customer relationship. Furthermore, questions abound about who possesses the valuable viewership data. These business deals are complex, and even once they are put in place, it takes time and resources to get the solutions to market.

One year later, in time for NAB 2012, we're seeing renewed vigor around the TV Everywhere discussion. The various players are realizing that their time is up - regardless of whether a happy resolution has been reached, consumers are demanding access to their content on all devices now. That has left these parties scrambling to deliver the content subscribers are paying for to their many devices, and needing a solution provider that enables them to do so quickly, securely and at scale.

The formula for a successful TV Everywhere implementation combines multi-device delivery and security, measurement, and subscriber authentication/authorization. To enable any of those services, a flexible cloud-based technology solution is required to rapidly and repeatedly reach the wide breadth of consumer devices. Without that flexibility, time to market slows and paying-subscriber engagement is potentially lost.
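
As a rough illustration of the authentication/authorization piece of that formula, here is a hypothetical sketch of a subscriber entitlement check. The names, data shapes, and rules are invented for clarity; real TV Everywhere deployments delegate this to operator identity systems and entitlement services rather than hand-rolled code like this.

```python
# Hypothetical sketch of the authN/authZ step in a TV Everywhere flow.
# All names and rules are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Subscriber:
    subscriber_id: str
    authenticated: bool           # proved who they are (logged in via their operator)
    entitled_channels: set[str]   # what their subscription tier includes

def authorize_playback(sub: Subscriber, channel: str, device_type: str,
                       supported_devices: set[str]) -> bool:
    """Grant playback only if the user is authenticated, entitled to the channel,
    and requesting from a device class the service has been built to reach."""
    if not sub.authenticated:
        return False
    if channel not in sub.entitled_channels:
        return False
    return device_type in supported_devices

# Example: a tablet request for a sports channel.
alice = Subscriber("A123", authenticated=True, entitled_channels={"news", "sports"})
print(authorize_playback(alice, "sports", "tablet",
                         supported_devices={"pc", "tablet", "set_top_box"}))
```

The `supported_devices` check is where the multi-device delivery problem shows up: every device class added to that set has to be reachable quickly, securely and at scale.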

What are your predictions for TV Everywhere in the next 6-12 months? Your comments are welcomed.

[Akamai announced its TV Everywhere services at NAB last year and will be demoing live client applications this year at Booth #SL8124.]

Adam Greenbaum leads Akamai's efforts around TV Everywhere...

Are Data Center CRACs Going the Way of the Ice Box?

Cooling systems represent about seventy percent of a data center's total non-IT energy consumption.  Eliminating cooling mechanicals, e.g., CRACs* and chillers, would be a significant step towards major energy and cost savings when you consider that many data centers consume hundreds to thousands of kilowatts of power - oodles more than office space.  But in regions with hot and/or humid climates, isn't mechanical air conditioning a necessity to keep IT equipment humming?


Not so anymore, according to The Green Grid (TGG).  At The Green Grid conference in early March, TGG announced its updated free-cooling maps, based on the new American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) guidelines for allowable temperature and humidity ranges for various classes of IT equipment.  These maps show where in the world, and for how many hours per year, outside air can be used for cooling ("air-side economization").  There's a lot of discussion in the whitepaper about dry-bulb and dew-point temperatures and psychrometric charts that I won't bore you with.  The net-net is that, depending on the ASHRAE classification of the IT equipment in use (A1-A4, with A4 being the most heat- and humidity-tolerant), free cooling can be used year round in 75%-100% of North America and greater than 97% of Europe, even with temperatures as high as 113°F!  Japan's environment is more challenging at 14%-90%.  The maps below, reproduced from TGG's whitepaper, show free-cooling ranges for the more delicate A2-classified IT equipment.  In full disclosure, to achieve 100% free cooling in some locations, operators must be okay with occasional incursions into heat and humidity ranges outside the recommended ASHRAE ranges.  But when one weighs the infrequency and short duration of these incursions, and the risk of IT equipment failure, against the CAPEX and OPEX savings of doing without mechanical cooling, it's certainly worth a look.
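
For a sense of how such maps get built, here is a rough sketch of the hourly check behind them: walk a year of outside-air readings and count the hours that fall inside the allowable envelope for a given equipment class. The A2 limits in the sketch are approximate placeholders, not the authoritative ASHRAE figures, so treat it as an illustration only.

```python
# Rough sketch of the kind of check behind TGG's free-cooling maps: for each hour of
# weather data, is the outside air within the allowable envelope for the IT class?
# The A2 limits below are approximate placeholders; consult the ASHRAE guidelines
# for the authoritative ranges before planning a real facility.

A2_ALLOWABLE = {
    "dry_bulb_c": (10.0, 35.0),   # assumed allowable dry-bulb range, in degrees C
    "dew_point_max_c": 21.0,      # assumed allowable dew-point ceiling, in degrees C
}

def free_cooling_ok(dry_bulb_c: float, dew_point_c: float,
                    envelope: dict = A2_ALLOWABLE) -> bool:
    """True if this hour's outside air can cool A2-class gear without chillers."""
    lo, hi = envelope["dry_bulb_c"]
    return lo <= dry_bulb_c <= hi and dew_point_c <= envelope["dew_point_max_c"]

# Example: fraction of eligible hours in a (made-up) set of hourly readings.
hourly_readings = [(18.0, 9.0), (33.0, 20.0), (38.0, 24.0), (12.0, 5.0)]
eligible = sum(free_cooling_ok(db, dp) for db, dp in hourly_readings)
print(f"{eligible / len(hourly_readings):.0%} of sampled hours allow free cooling")
```

Run over a full year of local weather data, that fraction is essentially what the maps color-code for each region and equipment class.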

Still not convinced?  Big data center operators aren't waiting to put theory into practice.  eBay is operating its new Phoenix data center with 100% free cooling year round, even during 115°F days!  And Facebook's state-of-the-art Prineville data center in Oregon was built for free cooling only.  I know, they deploy masses of servers on a monthly basis and don't have voided equipment warranties to worry about.  But most technology refreshes happen on a three-year time frame, which is not too far off to assess your free-cooling options for the next planning cycle.  And consider that just turning on air-side economization was found to save an average of 20% in money, energy and carbon.

* CRAC = computer room air conditioner