Sunday, April 27, 2008

Why cloud computing is risky

I enjoy Nicholas Carr's blog Roughtype as he addresses a lot of issues IT professionals/vendors are not willing to look at because a) if they did they might have to change and b) if they did they might have to change. He's been doing some writing about cloud computing of late, prompting my response to one of his posts. Nicholas believes IT Doesn't Matter. If you haven't heard of his articles and book on this subject, you can look it up here.

He discusses the move of IT from a strategic model to a utility model, where it is no less important than electricity - and no more strategic. After all, what company gains strategic value today from being hooked up to the power grid?

Here is my response to his blog post:

You say, "When the Amazon system was only used by the Amazon store, in contrast, its diversity factor and capacity utilization were woefully low - a trait it had in common with most private corporate IT operations." I see your point. If we liken IT to a utility model, then the more customers you have with demand arriving at different hours, different intervals, and different volumes, the smoother your aggregate demand becomes (I wonder how you translate power factor to IT?). Ultimately, with an infinite number of customers spanning the globe, your load will be flat, allowing you to right-size your supply rather than oversize it to deal with spikes.
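The smoothing effect is easy to demonstrate with a toy simulation. The demand model below is entirely made up for illustration - each customer idles at one unit of load and spikes at one random hour of the day - but it shows how the peak-to-average ratio collapses once you aggregate many customers whose spikes don't line up:

```python
import random

random.seed(42)

HOURS = 24
N_CUSTOMERS = 1000

def customer_load():
    """One customer's hourly demand: mostly idle, with a spike at a
    random peak hour (a toy model, not real traffic data)."""
    peak = random.randrange(HOURS)
    return [10.0 if h == peak else 1.0 for h in range(HOURS)]

def peak_to_average(load):
    """Ratio of peak demand to average demand - capacity you must
    provision versus capacity you actually use."""
    return max(load) / (sum(load) / len(load))

# A single customer: very spiky, so a dedicated box sits mostly idle.
single = customer_load()

# Aggregate many customers whose peaks fall at different hours.
aggregate = [0.0] * HOURS
for _ in range(N_CUSTOMERS):
    c = customer_load()
    for h in range(HOURS):
        aggregate[h] += c[h]

print("single customer peak/avg: %.2f" % peak_to_average(single))
print("aggregate peak/avg:       %.2f" % peak_to_average(aggregate))
```

The single customer needs roughly 7x its average demand in provisioned capacity; the aggregate of a thousand customers needs only slightly more than its average - which is exactly the right-sizing advantage the utility model claims.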

Correct me if I'm wrong, but this can be achieved at the enterprise level by consolidating loads from different applications with varying demands onto the same physical box (virtualization of memory, CPUs, networks, storage). It gets even better if the loads are global and on the same hardware. Of course, if you extend this far enough you'll achieve the same loads and efficiencies as Amazon. Today's virtualization technologies are functional and valuable but still immature. Give them time and the efficiencies will improve. All that to say, I think enterprises have a way to go before they run out of ways to manage capacity and value better.
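The consolidation argument boils down to a simple inequality: the peak of a summed load is less than the sum of the individual peaks whenever the peaks don't coincide. A minimal sketch, using three invented application load profiles (the numbers are assumptions, not measurements):

```python
# Hourly CPU demand (in cores) for three apps with peaks at different
# times of day - an illustrative workload, not measured data.
web_app   = [2, 2, 2, 8, 8, 2, 2, 2]   # peaks during business hours
batch_job = [8, 8, 2, 2, 2, 2, 2, 2]   # peaks overnight
reporting = [2, 2, 2, 2, 2, 2, 8, 8]   # peaks in the evening

apps = [web_app, batch_job, reporting]

# Dedicated hardware: each app gets a box sized for its own peak.
dedicated_capacity = sum(max(app) for app in apps)

# Consolidated (virtualized): one box sized for the peak of the
# combined load.
combined = [sum(hour) for hour in zip(*apps)]
consolidated_capacity = max(combined)

print("dedicated capacity:    %d cores" % dedicated_capacity)     # 24
print("consolidated capacity: %d cores" % consolidated_capacity)  # 12
```

Here consolidation halves the required capacity, because no two apps spike at the same hour - the same diversity-factor effect the utility gets, just at enterprise scale.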

The downside of all of this, including all the cloud efforts, is the underlying complexity. Not only is Google's hardware investment growing yearly; so is the support infrastructure required (people and systems). This leads me to a different point. IT utilities are different from electrical ones. Electrical utilities are geographically limited, while IT is not. You just need to put in bigger 'pipes' and the data could be flowing to servers across the globe instead of across the city. Clouds can balance load across a geographically distributed infrastructure. This becomes problematic when you consider that more complex systems have a higher tendency for catastrophic failure. What would happen if half the world's computers shut down at the same time? That can never happen with local computing (except at that one location), which is why inefficiency is desirable - it's what buys you redundancy. Imagine if that power outage that affected parts of Eastern Canada plus the US East Coast a few years ago (due to the system's complexity) had affected 1/4 of the planet. Hmmm
