Monday, February 9, 2009

Geva Perry and Cloud Computing Standardization

I really enjoy Geva Perry's writing; whether I'm in agreement or otherwise, he generally deals out cogent argument. However, in the matter of standardization, his recent post can't go without remark.

The argument that cloud computing is too immature and unformed for standards efforts to be effective is one thing. The argument that such efforts, if engaged in too early, are harmful to the industry is quite another.

Geva makes the case for letting the community or a market identify the de facto standard, and then taking up the momentum in a formal standardization effort to solidify it. That might well be a reasonable approach for some of the examples he cites (which are notably programming-oriented), but not necessarily for cloud computing/interclouds. In particular, when the standard in question is, at its core, a vehicle for transportation or communication among autonomous actors, the de facto, let-the-strongest-approach-win model is NOT necessarily recommended.

One need look no further than the mobile telephony environment in the US and (most of) the rest of the world. GSM was the agreed-upon creation of a formal standardization effort. By pretty much all measures, it has created an incredibly fruitful industry. One can argue that, while it has its flaws, the amount of time required to advance wireless technology has been in great part a function of trying to jam together the GSM and CDMA technical standards and their respective infrastructures.

The requirements of and objectives for a standard should be enunciated as early as possible in order to provide an objective measure of success, and to provide an orderly process of advancement.  The history of the internet's TCP/IP suite provides a good example of how to do it well, in contrast to many of the heavyweight OSI examples.  Yes, unwieldy, unused standards are clearly a danger.  But the characterization of formal, de jure standards as being the spawn of political and corporate machinations, while recognizable, is overly dramatic.  The point is, inappropriate, "bad" standards are NOT the predestined outcome of every formal standardization effort... even the "early" ones.

(To be fair to ISO/CCITT endeavors, I would credit OSI for some GREAT contributions. What come to mind immediately are the highly beneficial reference model and the ASN.1 approach to representation of upper layer protocols, both of which have been a boon to the data communications and telecommunications industries.)

Arguably, the best standards are those that don't over-reach and keep focused on the "what" as opposed to the "how." But the best standards in communications are those which started early, set out the universe of discourse, and provided every participant with an unambiguous means of expression and representation. So, I respectfully disagree with Geva on this one.

Thinking Out Cloud: Beware Premature Elaboration (of Cloud Standards)

There is another reason to avoid forcing a formal standardization process in cloud computing. It's simply not the ideal way to create a standard. There are two kinds of standards: de jure and de facto. Both phrases are from the Latin, the former meaning "by law" and the latter "by fact" -- or in other words, there are official standards and standards that were not defined formally, yet have become standards in daily practice.

Formal standards are the ultimate form of "design by committee." They require compromises and involve politics and ulterior motives. They also tend to be too slow and bureaucratic. They are not the optimal way to come up with the best solution the market needs for a particular problem. The marketplace is the best tool for that (and when I say "marketplace" I also include free open source). As a particular class of technology matures, it is actually good to have a number of competing approaches and let the users vote with their feet for the one that best suits their needs.

...

In summary, the correct approach to developing standards is to first let the marketplace converge on de facto standards. A formal standards body can step in at a later stage to tie up loose ends and manage a more stable process.

Reader Comments (2)

Hi, Rich. I enjoyed this post. For some strange reason, I find the issue of standards fascinating...

You make a good argument, but I think your GSM example actually strengthens my point and raises another issue I have not discussed in my post. First, I would argue that although GSM technology itself was new when the standard came out (and I'm no expert on the topic, just based on what you wrote), the class of technology -- cellular telephony -- was fairly mature. So the industry had a very good understanding at that point of the challenges that the standard needs to address. In fact, you could say that it followed the pattern I was describing, which is that in the early stages of evolution there were a number of competing technologies, and as the industry matured they were ready to converge on a standard -- in this case, GSM.

TCP/IP is an altogether different story and has a much more complicated history, but I won't get into that here.

Second, and I am thinking out loud here, it seems to me that the question of how early to introduce a standard has to do with the importance of interoperability as a barrier to adoption for that particular class of technology, or to put it differently, the value of the "network effect".

Clearly, in telecommunications it has huge value and lack of interoperability would be a high barrier to entry. Cloud interoperability, as important as it is in the long run, does not seem to me to be as big an adoption inhibitor at this point as it was in telecom.

Geva Perry | http://gevaperry.typepad.com
Feb 10, 2009 at 6:54AM | Unregistered CommenterGeva Perry
Geva, many thanks for your comment. I, too, have a somewhat morbid fascination with standards (both de jure and de facto).

As for the GSM standardization, this took a significant length of time, but I'd argue that the technologies (plural) involved were about as well developed as many we're considering with cloud computing interoperability.

As for TCP/IP and OSI -- I'm speaking from some real-life experience. I spent more years of my life than I care to think about involved in aspects of both. TCP/IP established a reasonable means of discourse and a very pragmatic approach to standards compliance.

In both cases, the communities of interest addressed the sets of issues early on, with full knowledge that by disambiguating concepts and "compartmentalizing" the standards efforts, the process allowed critical aspects to proceed at a faster pace, without seriously jeopardizing the more difficult (i.e., time-consuming) or the less vital.

Perhaps we're not speaking of hard "standards" so much as standardized nomenclature, unambiguous representation and extensible definition.

I'm not certain I agree with you that lack of interoperability poses no "high barrier to entry." As in data communications, lack of interoperability does create a barrier to adoption by both end-users and providers. It therefore encourages, too often, the promotion of a single (large) company's proprietary approach.

I have to believe that the concern of potential cloud users regarding "lock-in" as a barrier to adoption is as related to interoperability as it is to migration.

Perhaps the point is not so much interoperability's relevance to near-term adoption as it is the elapsed time required for interoperability to be "done right." Let's work backwards from the point in time where we BOTH would agree that interoperability is a significant barrier of some kind: I suggest to you that doing the groundwork required for interoperability standards needs to start now, and in earnest.
Feb 10, 2009 at 7:28AM | Unregistered CommenterRich Miller
