Saturday
Jul 26, 2008

The (Futile) Search for Fungibility in Cloud Computing

While catching up on some reading (typical for early Saturday mornings), I came across a number of posts about the Cloud Computing Camp held earlier this month in London. Among them, this post from Phil Wainewright re-surfaced a point that's irritated me for years. It's a point that Phil and others (like Dan Farber) seem willing to ignore for the sake of a nice-looking metaphor.

Phil, Dan and others rightfully take issue with the notion of "utility computing" in which one draws (in an on-demand fashion) compute power from a commercial source ... At this point, some overzealous commentator (not Phil, and not Dan) invokes the "electricity grid" and "power utility" image, from which we draw electricity with the simple act of plugging into a wall socket and paying the bill at the end of the month.

What's irritating is that so many people fall into this trap and try to make the case that there's no standard unit (like the kilowatt-hour or the barrel of oil) on which to base pricing. The discussion (or argument) that then ensues seems pointless to me. A mix of CPU-RAM hours, GBs of storage and MBs of communication is simply not sufficient to describe and distinguish the use of "utility compute resources".
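To make the point concrete, here's a minimal sketch (in Python, with entirely hypothetical rates) of what pricing on raw units alone looks like. Note that two very different deployments are indistinguishable to it:

```python
# Hypothetical per-unit rates -- the kind of "kilowatt-hour" pricing
# people reach for when the electricity metaphor takes hold.
CPU_RAM_HOUR = 0.10   # $ per CPU-RAM hour
GB_STORAGE   = 0.15   # $ per GB-month of storage
GB_TRANSFER  = 0.12   # $ per GB transferred

def bill(cpu_ram_hours, storage_gb, transfer_gb):
    """Price a month of usage on raw resource units alone."""
    return (cpu_ram_hours * CPU_RAM_HOUR
            + storage_gb * GB_STORAGE
            + transfer_gb * GB_TRANSFER)

# A load-balanced web farm with failover requirements and a single
# batch-analytics job can consume identical unit totals...
web_farm  = bill(720, 50, 100)
batch_job = bill(720, 50, 100)

# ...and the units say nothing about topology, SLAs, security policy,
# or administration -- the two bills come out identical.
print(web_farm == batch_job)
```

Nothing in that function can distinguish the two workloads; the gap -- configuration, operation, administration -- is exactly where the metaphor breaks down.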

If I were to turn the metaphor on its head, I would be looking for a source of electric power so highly configurable and individually customized that the order might sound like:

I would like to purchase and have delivered to 1234 Elm Street, Anytown USA the following service: three-phase, alternating current at 47 cycles per second, with a deviation of no more than 2 cycles per second in any 5 minute interval, beginning with 2.245 amps during the daylight hours in Kyoto (adjusted seasonally) and scalable in night time to as much as 3.14159 amps.


The point is that, independent of finding a convenient pricing unit, the resource(s) we make use of from a compute utility are rarely, if ever, fungible when we go beyond the most basic parts of the infrastructure.

Utility computing, cloud computing, and (oh no!! he's going to say it!!) grid computing are examples of mass customization, applied to the definition and delivery of a production facility -- a factory. The value of the service resides in the customer's ability to (easily and successfully) specify the requirement, act as his own "designer" of the facility, and then make use of it. The price point has a lot LESS to do with the cost of an hour of dual-core 1.8 GHz x86 with 8 Gigs of RAM, and a lot more to do with the configuration, operation, administration and management of the customer's "personalized" production facility. There are damn few standard pricing units for the construction and use of a factory.

There.... I feel better now.

fungible - Definition from the Merriam-Webster Online Dictionary

being of such a nature that one part or quantity may be replaced by another equal part or quantity in the satisfaction of an obligation

Mass customization - Wikipedia
Mass customization, in marketing, manufacturing, and management, is the use of flexible computer-aided manufacturing systems to produce custom output. Those systems combine the low unit costs of mass production processes with the flexibility of individual customization.

Sunday
Jul 20, 2008

De-nebulating "Cloud Computing"

While catching up on my reading (which is pretty daunting when Google Reader tells me that my "high priority" collection of virtualization and utility computing feeds holds over 1,000 new posts), I came across Alistair Croll's nine-sector view of cloud computing.

Taking a look at that post prompted me to revisit John Willis' post from February and the wealth of high-quality comments he elicited. John's post, and now Alistair's, represent great "locations" in the blogosphere at which knowledgeable advocates and the loyal opposition convene to bring clarity to the conversation. What I also enjoy is that I've had and continue to have the privilege of knowing personally and working with so many of the participants.

I'm struck, as well, by what seems to be a gap ... or maybe several ... in their lists. And, being an amateur taxonomist and incorrigible entrepreneur, I view a gap as a puzzle to be solved and a potential market to be served. I'll take the time over the next few days to reflect on the gaps, and then pose a couple of questions and see if I can add to the fun. I'll be gratified if the result adds to the conversation established by John and Alistair, as well as to those raised by James Urquhart, Greg Ness, Bert Armijo, Dave Durkee, and Rich Wellner (among others). (I'm most appreciative of Bert's most recent posts, as well as the fun poked at the Cloud Computing Expo's Twenty Experts Define Cloud Computing piece.)

Inside the Cloud: 9 Sectors to Watch - GigaOM

There’s already a ton of activity taking place in the cloud computing space, so much so that it can be hard to know who to watch. In many cases, it’s too early to pick winners. But there are distinct sectors of the IT industry that are particularly well suited to the on-demand, pay-as-you-go economics of cloud computing. Here are eight segments — and one company that’s a segment all its own — that we’re tracking closely.

Sunday
Jul 6, 2008

MyCMDB - the CMDB as a Wikipedia Plug-in to Facebook

At the risk of piling on, I'll join the refrain regarding the recently announced MyCMDB from Managed Objects. As described, it makes no sense to me. I can't for the life of me figure out how one uses social networking and the "principles of Web 2.0" to solve the CMDB data accuracy and completeness problems.

myCMDB - Managed Objects

... Managed Objects myCMDB™ solves CMDB data accuracy and accessibility issues incumbent with today's CMDB implementations. By integrating principles of Web 2.0 and social networking into a new web-based application, myCMDB delivers role-based “communities” where users can more easily and effectively view and interact with CMDB data – and other CMDB users as well. ...

Sunday
Jul 6, 2008

Why Cloudware and why now?

In September of last year, as I was preparing (mentally and emotionally) to get Replicate started on its current path, I considered issues of portability and interoperability in the virtualized datacenter. I posted a few comments about OVF but one in particular drew the attention of Bert Armijo of 3tera.

At that time, Bert indicated that he thought it "... too early for a standard,...", with a (perfectly arguable) claim that standards are often "... a trade-off to gain interoperability in exchange for stifling innovation." He went on to say that "(w)e haven't adequately explored the possibilities in utility computing." He then provided a critique of OVF. (Whether I agree with that critique or not is immaterial to this post, and the subject for another time.)

At the end of June, 3tera announced their Cloudware vision for a standards-based interoperable utility infrastructure. Since the arrival of Cloudware, there have been a number of venues at which "cloud computing" and interoperability have been on the minds of the cognoscenti... Structure08 and Velocity being the most heavily covered. In the past few weeks, there have also been claims and counter-claims of support... and to be fair, the disputed claims of support were made by others, not by 3tera.

So... what's changed, Bert? Why is "now the time" to create the standard for interoperable cloud computing? What's happened in 9 - 10 months that has so changed the field, that these efforts don't also stifle innovation?

Simon Wardley has also reiterated his position most recently at OpenTech regarding substitutability between utility providers (which includes portability and interoperability) ... an outcome which he maintains will require not just open standards but open source standards. When compared to the Cloudware initiative, I can more easily support this "pure form" of standard creation. The commercial success of a pure, open source standards approach to utility computing, however, requires a reasonably well-established reference implementation or some acknowledged leader as the de facto standard. (Again, a topic for yet another post.)

That said, Simon and I could not be more in agreement when he states that "... standards will emerge through competition and adoption rather than committee." I'd probably add to that statement that such standards don't (often) emerge as a result of the smaller, fragmented commercial interests banding together to form a "composite" competitor to a market leader.

I have to agree with John Willis when he states that "...what we today call the 'cloud' will really just evolve into a complex IT infrastructure ... which will link services from a myriad of inter connected inter-operable applications spanning internal legacy applications, internal/external virtual resources, private clouds and public clouds." (Full quote provided below.)

Head In The Clouds | 3Tera

Well I’m happy to say that I think the time has come when we have enough companies in the space working on creative products and services that a standard can progress productively. We’ve begun to share our vision for what that standard can achieve, it’s called Cloudware, and covers not only AppLogic but a whole new way to approach infrastructure.

john m willis ESM Enterprise System Management Blog
It is my belief that what we today call the “cloud” will really just evolve into a complex IT infrastructure of the future, and in the end, will just be referred to as infrastructure. There is no doubt the traditional IT landscape of the last 20 years is going through a substantial transformation on the same scale as what happened in the mid 1980’s as mainframe resources shifted to distributed computing and client server architectures.

This new complex IT infrastructure of the future will link services from a myriad of inter connected inter-operable applications spanning internal legacy applications, internal/external virtual resources, private clouds, and public clouds. For example, I can envision a scenario where a business service runs internal behind-the-firewall VMware instances for parts of an application and possibly inter-operates with resources on Amazon’s EC2, Flexiscale, Google’s App Engine, or a player to be named later. These same business services might also use resources from private internal clouds running 3Tera’s Applogic, IBM’s Blue Cloud, or Cassatt’s Active Power Management. Like it or not, Microsoft will have resources involved in this new IT management infrastructure of the future. Any interoperability discussion will need to include them as well. ...

Friday
Jun 13, 2008

Jurisdiction - where in the world is that VM?

James Urquhart has an interesting post on a topic that's fascinated me for a long time -- namely, under what legal jurisdiction does a computed "transaction" take place?

The problem first came to my attention (sometime during the last ice age) with the advent of ATMs offering services from national banking and credit card concerns. If I withdrew money or paid a credit card bill at the ATM, exactly where (for purposes of the relevant legal jurisdiction) did the transaction take place? Banking laws being what they are, the industry got around a host of problems by declaring an ATM to be a "branch bank," making clear, for purposes of law, the geographic location at which the financial transaction took place.

The days of dumb terminals and thin-client computing brought with them a boatload of jurisdictional issues. And now, cloud computing and virtual server migration add to the puzzle. It's a great problem on which to reflect. James' discussion is well grounded and presents the salient issues in a very nice way.

The Wisdom of Clouds: "Follow the law" computing

A few days ago, Nick Carr worked his usual magic in analyzing Bill Thompson's keen observation that every element of "the cloud" eventually boils down to a physical element in a physical location with real geopolitical and legal influences. This problem was first brought to my attention in a blog post by Leslie Poston noting that the Canadian government has refused to allow public IT projects to use US-based hosting environments for fear of security breaches authorized via the Patriot Act.