Tuesday
Feb 10, 2009

Hoff to Cohen to ... Watching a Meme and its Wake

Reuven Cohen of Enomaly fame took a nugget from Christofer Hoff and really worked it well.  What I like as much as (if not more than) the notion of an IPv6 VLAN overlay on IPv4 is this statement:

"The cloud is a kind of "multiverse" where the rules of nature can continually be rewritten using quarantined virtual worlds within other virtual worlds (aka virtualization)."
This is precisely the recognition the founders of Replicate, Rich Pelavin and Ken Novak, had several years ago, and it became the basis of the company's first offering: a networking "lab" offered as a service for developers who needed to test their multi-tiered applications across various network configurations.  They simply provisioned virtual network appliances -- router, switch, load balancer, firewall -- virtually reconfigured the "cabling" and... voila.
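To make that concrete, here's a toy sketch (purely illustrative -- this is not Replicate's actual interface) of what such a software-defined lab boils down to: appliances become objects, and "re-cabling" is just mutating a set of links.

```python
# Toy illustration (not Replicate's actual interface) of describing a virtual
# test network declaratively: pick appliances, then "re-cable" them in software.
from dataclasses import dataclass, field

@dataclass
class Appliance:
    name: str
    kind: str  # "router" | "switch" | "load_balancer" | "firewall"

@dataclass
class VirtualLab:
    appliances: dict[str, Appliance] = field(default_factory=dict)
    links: set[frozenset[str]] = field(default_factory=set)

    def provision(self, name: str, kind: str) -> Appliance:
        self.appliances[name] = Appliance(name, kind)
        return self.appliances[name]

    def cable(self, a: str, b: str) -> None:
        """Virtually connect two appliances; re-running with different
        pairs is the software equivalent of re-patching the lab."""
        self.links.add(frozenset((a, b)))

    def uncable(self, a: str, b: str) -> None:
        self.links.discard(frozenset((a, b)))

lab = VirtualLab()
lab.provision("edge", "router")
lab.provision("fw1", "firewall")
lab.provision("lb1", "load_balancer")
lab.provision("app-switch", "switch")
lab.cable("edge", "fw1")
lab.cable("fw1", "lb1")
lab.cable("lb1", "app-switch")

# Re-test the same application under a different topology: take the firewall
# out of the path without touching any physical cabling.
lab.uncable("fw1", "lb1")
lab.cable("edge", "lb1")
print(sorted(tuple(sorted(link)) for link in lab.links))
```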

It's wonderful to see lightbulbs going on, triggered by other lightbulbs.

ElasticVapor :: Life in the Cloud: The Hybrid Cloud Multiverse (IPv6 VLANS)

  ...one of the great things about cloud computing is in its ability to virtualize everything. The cloud is a kind of "multiverse" where the rules of nature can continually be rewritten using quarantined virtual worlds within other virtual worlds (aka virtualization). The need for a traditional physical piece of hardware is no longer a requirement or necessary.

For example VLANs don't need to differentiate between IPv4 and IPv6; the deployment is just dual-stack, as Ethernet is without VLANs. So why not just use modern VLAN technology to "overlay" IPv6 links onto existing IPv4 links? This can be achieved without needing any changes to the IPv4 configuration and allows for seamless and secure cloud networking while also allowing for all the wonders that IPv6 brings. It's in a sense the best of both worlds, the old with the new.

A VLAN-based IPv6 overlay offers several interesting aspects, such as network security that is directly integrated into the design of the IPv6 architecture (security being one of the biggest limitations to broader cloud adoption). IPv6 also implements a feature that simplifies aspects of address assignment (stateless address autoconfiguration) and network renumbering (prefix and router announcements) when changing Internet connectivity providers. It's almost like the designers of IPv6 envisioned the hybrid cloud model.

Thanks for the inspiration Hoff, looking forward to trying this out.
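For anyone who, like Reuven, is itching to try this out, here is a minimal sketch of the overlay idea on a Linux host: create a tagged VLAN subinterface on top of an existing IPv4-configured NIC and give it only an IPv6 address, leaving the IPv4 side untouched. The interface name, VLAN ID and prefix are placeholders, and the commands assume a Linux box with iproute2 and 802.1Q support.

```python
# Sketch: overlay an IPv6-only VLAN on top of an existing IPv4 interface.
# Assumes Linux with iproute2 and 802.1Q (VLAN) support; run as root.
# Interface name, VLAN ID, and prefix below are illustrative placeholders.
import subprocess

PARENT_IF = "eth0"                  # existing interface, already carrying IPv4
VLAN_ID = 100                       # tag chosen for the IPv6 overlay
VLAN_IF = f"{PARENT_IF}.{VLAN_ID}"
IPV6_ADDR = "2001:db8:100::1/64"    # documentation prefix, stand-in only

def run(*cmd: str) -> None:
    """Run an iproute2 command, raising if it fails."""
    subprocess.run(cmd, check=True)

# 1. Create the tagged subinterface; the parent's IPv4 config is untouched.
run("ip", "link", "add", "link", PARENT_IF, "name", VLAN_IF,
    "type", "vlan", "id", str(VLAN_ID))
run("ip", "link", "set", VLAN_IF, "up")

# 2. Give the overlay an IPv6 address. With a router advertising this prefix
#    on the VLAN, hosts could instead rely on stateless autoconfiguration
#    (SLAAC) and skip the static assignment entirely.
run("ip", "-6", "addr", "add", IPV6_ADDR, "dev", VLAN_IF)

print(f"IPv6 overlay {VLAN_IF} is up; IPv4 on {PARENT_IF} is unchanged.")
```

The host is then dual-stack in exactly the sense described above: IPv4 continues on eth0 as before, while IPv6 rides the tagged overlay.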

Tuesday
Feb 10, 2009

LISP, Resilience and the InterCloud

Catching up on some blog reading, I'm pleased to see Lori MacVittie's post on interoperability between clouds, but not for the reasons one might think. 

First, her description of LISP and its similarity in function to UDDI was definitely helpful.  (Thanks!)  It makes clear to me one aspect of the independent set of core intercloud services likely to be necessary in any pragmatic solution for workload interoperability.

The point she then makes about its lack of applicability... or, more correctly, its lack of sufficiency... for internal workload mobility is one we've been making at Replicate for as long as the company has existed.  What's required for workload mobility in the virtualized datacenter is "infrastructure resiliency"... a topic I've written on in the past.  That is, the virtualized and physical access network must provide defined connectivity, resilience (absence of single points of failure), security and performance to a set of individually administered resources.  The fact is, workload mobility within the datacenter is probably best defined as moving a "flock of resources" in such a way that, upon reaching their destination(s), the resources retain appropriate connectivity (at the various network levels) along with the other measures of "goodness."  Providing that abstraction and then (under the covers) assuring that end result demands more than a configure-it-once-and-forget-it infrastructure.

Finally, the organization of LISP within an intercloud environment implies the existence and availability of a publicly accessible "reverse DNS"-like mapping service. It seems to me that (to Lori's point) one of the key deliverables of a cloud interoperability standard would be to detail how that LISP-style service (offered from the intercloud environment) safely and reliably interworks with the datacenter's internal means of "herding the flocks" of resources that define a workload.  Put that on the list of deliverables for the intercloud interop standards.

Just saying.

Interoperability between clouds requires more than just VM portability

If LISP sounds eerily familiar to some of you, it should. It's the same basic premise behind UDDI and the process of dynamically discovering the "location" of service end-points in a service-based architecture. Not exactly the same, but the core concepts are the same. The most pressing issue with proposing LISP as a solution is that it focuses only on the problems associated with moving workloads from one location to another, with the assumption that the new location is, essentially, a physically disparate data center and not simply a new location within the same data center -- an issue LISP does not even consider. That it also ignores other application networking infrastructure that requires the same information - that is, the new location of the application or resource - is also disconcerting, but not a roadblock; it's merely a speed-bump in the road to implementation.

...

Applications, and therefore virtual images containing applications, are not islands. They are not capable of doing anything without a supporting infrastructure - application and network - and some of that infrastructure is necessarily configured in such a way as to be peculiar to the application - and vice-versa.

... One cannot simply move a virtual machine from one location to another, regardless of the interoperability of virtualization infrastructure, and expect things to magically work unless all of the required supporting infrastructure has also been migrated as seamlessly. And this infrastructure isn't just hardware and network infrastructure; authentication and security systems, too, are an integral part of an application deployment.

Even if all the necessary components were themselves virtualized (and I am not suggesting this should be the case at all) simply porting the virtual instances from one location to another is not enough to assure interoperability as the components must be able to collaborate, which requires connectivity information. ...
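To make the UDDI analogy concrete, here's a toy sketch of the one idea LISP and UDDI share: a mapping service that resolves a stable endpoint identifier to its current locator, re-registered whenever the workload moves. This is an illustration of the concept only -- not the LISP wire protocol, not a UDDI registry -- and every name and address in it is made up.

```python
# Toy identifier-to-locator registry: the shared core idea behind LISP
# (EID -> RLOC mapping) and UDDI (service key -> endpoint discovery).
# Purely illustrative; not the LISP protocol or a UDDI implementation.
from dataclasses import dataclass

@dataclass
class Locator:
    address: str   # where the endpoint is reachable right now
    site: str      # e.g. which data center / cloud currently hosts it

class MappingService:
    def __init__(self) -> None:
        self._map: dict[str, Locator] = {}

    def register(self, endpoint_id: str, locator: Locator) -> None:
        """Announce (or re-announce after a move) where an endpoint lives."""
        self._map[endpoint_id] = locator

    def resolve(self, endpoint_id: str) -> Locator:
        """Callers keep using the stable identifier; only the locator changes."""
        return self._map[endpoint_id]

registry = MappingService()
registry.register("orders-app", Locator("203.0.113.10", "dc-east"))

# Workload moves to another provider: the identity stays, the locator is rewritten.
registry.register("orders-app", Locator("198.51.100.7", "cloud-west"))
print(registry.resolve("orders-app"))
```

Which is exactly Lori's point, and ours: the lookup says nothing about the rest of the flock -- the firewalls, load balancers and policies that also have to arrive intact for the workload to actually run.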

Monday
Feb 9, 2009

Geva Perry and Cloud Computing Standardization

I really enjoy Geva Perry's writing; whether I'm in agreement or otherwise, he generally deals out cogent arguments.  However, in the matter of standardization, his recent post can't go without remark.

The argument that cloud computing is too immature and unformed for standards efforts to be effective is one thing.  The argument that such efforts, if engaged in too early, are harmful to the industry is quite another.

Geva makes the case for letting the community or a market identify the de facto standard, and then taking up the momentum in a formal standardization effort to solidify it.  That might well be a reasonable approach for some of the examples he cites (which are notably programming-oriented), but not necessarily for cloud computing and interclouds.  In particular, when the standard in question is, at its core, a vehicle for transportation or communication among autonomous actors, the de facto, let-the-strongest-approach-win route is NOT necessarily recommended.

One need look no further than the mobile telephony environment in the US and (most of) the rest of the world.  GSM was the agreed-upon creation of a formal standardization effort, and by pretty much all measures it has created an incredibly fruitful industry.  One can argue that, flaws and all, much of the time required to advance wireless technology has been a function of trying to jam together the GSM and CDMA technical standards and their respective infrastructures.

The requirements of and objectives for a standard should be enunciated as early as possible, in order to provide both an objective measure of success and an orderly process of advancement.  The history of the internet's TCP/IP suite provides a good example of how to do it well, in contrast to many of the heavyweight OSI examples.  Yes, unwieldy, unused standards are clearly a danger.  But the characterization of formal, de jure standards as the spawn of political and corporate machinations, while recognizable, is overly dramatic.  The point is, inappropriate, "bad" standards are NOT the predestined outcome of every formal standardization effort... even the "early" ones.

(To be fair to ISO/CCITT endeavors, I would credit OSI with some GREAT contributions.  The ones that come to mind immediately are the highly beneficial reference model and the ASN.1 approach to representing upper-layer protocols, both of which have been a boon to the data communications and telecommunications industries.)

Arguably, the best standards are those that don't try to overreach and keep focused on the "what" as opposed to the "how."  But the best standards in communications are also those which started early, set out the universe of discourse, and provided every participant with an unambiguous means of expression and representation.  So, I respectfully disagree with Geva on this one.

Thinking Out Cloud: Beware Premature Elaboration (of Cloud Standards)

There is another reason to avoid forcing a formal standardization process in cloud computing. It's simply not the ideal way to create a standard. There are two kinds of standards: de jure and de facto. Both phrases are from the Latin, the former meaning "by law" and the latter "by fact" --or in other words, there are official standards and standards that were not defined formally, yet have become standards in daily practice.

Formal standards are the ultimate form of "design by committee." They require compromises and involve politics and ulterior motives. They also tend to be too slow and bureaucratic. They are not the optimal way to come up with the best solution the market needs for a particular problem. The marketplace is the best tool for that (and when I say "marketplace" I also include free open source). As a particular class of technology matures, it is actually good to have a number of competing approaches and let the users vote with their feet for the one that best suits their needs.

...

In summary, the correct approach to developing standards is to first let the marketplace converge on de facto standards. A formal standards body can step in at a later stage to tie up loose ends and manage a more stable process.

Sunday
Feb 8, 2009

Thinking about Technology Development, Darwinism & Intent

Kevin Kelleher at GigaOm offers a bit of sobering rumination for a Sunday morning.  The question raised (with the answer left as an exercise for the reader) is how the tech industry would respond to a significant worsening of the economy.  The answers vary depending on the stage of the company, the nature of its market and the size of its bank account.

One aspect it raises is how new technologies get born and introduced.  What appears very clear to me, having been through a sufficient number of boom-and-bust cycles in the tech world, is the Darwinian nature of the industry.  There are the years of plenty (and plenty of hype), in which the costs of developing a new offering are modest and favor small groups experimenting with new ideas.  During the same years of plenty, there are some substantial, truly big plays created with the necessary capital backing them.  This experimentation allows the nature of markets, the inevitable downturn and the change in perceived demand to winnow the offerings.  (I keep remembering the dot-com era and some of the web-based businesses - both large and small - that were hailed as the paragons of the internet economy. I also recall a number of them as smoking craters.)

But what's also come home to me of late is the potentially imprudent reliance on Darwinian navigation through product definition.   The religious belief that rapid prototyping, agile software definition and development, or a new development underpinning (like Java in the '90s, or Ruby today) can automagically generate a sustainable offering is potentially fatal to a large number of its adherents.   At the industry-wide level it's great for producing a lot of innovative, unconventional offerings, from which the survivors emerge and a large number of wannabes fail.  But taken as the primary force for product or service definition, it leaves any individual adopter as simply another experimental "mutation" that has to run the gauntlet that determines the fittest.

This makes the questions raised in Kelleher's post a vital aspect of the development and planning process.  Intention and direction need to be incorporated in large doses in order for the advanced (and somewhat miraculous) development methods and tools to truly generate a greater chance of survival and eventual success.

What If It’s Worse Than We Think?

...
Right now, an economic depression is still far from certain, but the possibility is real enough that companies will need to prepare. What will it mean?

At first, it will favor large, cash-rich companies like Google, Microsoft and Cisco, or any company that can finance itself through its own operations. Others with decent promise but weak cash flows might hope to be bought. VCs will be forced to trim portfolios. Companies that have cut staff to the bone will have to cut more, even at the risk of hurting future growth. But even healthier companies might see their cash flows dwindle over time.

For the past few years, most tech companies have progressed incrementally, tossing out a new feature or a new gadget and seeing what takes root. But an even tougher economy would demand harder questions, rethinking what a company does at the most basic level: Why is your company here? What is it offering and why would someone else want to pay for your stuff?
...

Friday
Feb 6, 2009

Application Delivery Networks, Cloud Interop and Metadata Ownership

Lori MacVittie lobs in a great post about (cloud) interoperability and the pragmatics of application delivery networks.  [Note to reader: It's important to get past the obligatory marketing bumph for F5's products.]

In considering interoperation of clouds (even between private clouds, or within a single, highly dispersed organization), the meta-data associated with an application -- or with content managed and processed by the application -- needs to be considered from a number of viewpoints:
- who "owns" the meta-data?
- on what basis might it be intentionally and "legally" exposed, shared, used by other participants?
- in which jurisdiction, at what point in time, must meta-data be processed/treated with respect to privacy, compliance, etc.?

I saw Alistair Croll's interview at Data Center Knowledge (run by the OTHER Rich Miller!!) earlier in the week and had been thinking about the implications.  But Lori's already done the heavy lifting.  I like her treatment of the problem, based initially on security and delivery policies.  I consider it a start on a list of additional types of policies which will either suffer or be reworked so as to accommodate the mobility of application workloads across the InterCloud.

The part I'm itching to ask her about ... or start a more open conversation: the possibility of "a specification regarding application network delivery metadata" which, if properly (??) abstracted and generic, could "allow the meta-data policies to be transported and applied across different cloud implementations while preserving the specific details of implementation within the cloud computing infrastructure."  Whoa!! Tall order, isn't it?  What does it imply we've done with respect to a standardized representation / standard semantics of peripatetic workload computing? (Sorry... couldn't bring myself to say "cloud" again.)

Update:  Lori's started a very cogent, readable response to the questions raised above.  I recommend you check out her post.

Who owns application delivery meta-data in the cloud?

Once the application delivery network is tuned to deliver an application it essentially becomes a part of the implementation; it becomes a necessary component of the application without which security and performance can degrade. If the application is to be moved from one cloud to another, the security and delivery policies need to move with the application in order to ensure that neither security nor performance of the application is compromised.

But as Alistair Croll points out in this interview at Data Center Knowledge, the question of who owns meta-data may prevent this from becoming reality. Like the popularity of a picture on Flickr, the ownership of application network infrastructure meta-data (the security and delivery policies) is highly in question.

...

So if the application delivery network is such an integral piece of a cloud computing provider's infrastructure, it seems unlikely they'll be willing to share the relevant meta-data with other cloud computing providers, driving complete interoperability and portability efforts to concentrate simply on application infrastructure.

...

It is possible that if a specification regarding application network delivery metadata were abstracted and could be applied across application delivery network implementations, that the "secret sauce" of a cloud computing provider's offering could be maintained while still allowing portability across cloud implementations. Such a generic specification would allow the meta-data policies to be transported and applied across different cloud implementations while preserving the specific details of implementation within the cloud computing infrastructure. The choice of application delivery infrastructure would remain an integral differentiation for cloud computing providers as each implementation of the metadata would remain specific to the infrastructure provider and therefore be better or worse depending on the implementation.

But as Alistair pointed out, the real question right now is who owns the meta-data? If the answer is the cloud computing provider, then even attempting to formulate such an interoperability specification that bridges application delivery infrastructure implementations seems as though it would be a wasted effort.
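Purely as a thought experiment, here's a hedged sketch of what such a generic delivery-metadata specification might look like: a provider-neutral description of the security and delivery policies that travels with the application, plus a provider-specific adapter that maps it onto whatever delivery infrastructure the receiving cloud actually runs. Every class and field name here is hypothetical; no such specification exists.

```python
# Hypothetical sketch of provider-neutral application delivery metadata.
# Field names and the adapter interface are invented for illustration only;
# no such interoperability specification exists (that's the point of the post).
from dataclasses import dataclass, field

@dataclass
class DeliveryMetadata:
    app_name: str
    tls_required: bool = True
    health_check_path: str = "/health"
    session_affinity: str = "none"   # e.g. "none" | "cookie" | "source-ip"
    allowed_sources: list[str] = field(default_factory=list)

class DeliveryAdapter:
    """Each cloud provider would implement this against its own
    (proprietary) application delivery infrastructure."""
    def apply(self, meta: DeliveryMetadata) -> None:
        raise NotImplementedError

class LoggingAdapter(DeliveryAdapter):
    """Stand-in 'provider' that just prints what it would configure."""
    def apply(self, meta: DeliveryMetadata) -> None:
        print(f"configuring delivery for {meta.app_name}: "
              f"tls={meta.tls_required}, affinity={meta.session_affinity}, "
              f"health={meta.health_check_path}, sources={meta.allowed_sources}")

# The metadata travels with the workload; the implementation stays provider-specific.
meta = DeliveryMetadata("orders-app", allowed_sources=["10.0.0.0/8"])
LoggingAdapter().apply(meta)
```

The "secret sauce" stays in the adapter; only the abstract intent crosses the provider boundary -- which is what makes the ownership question so pointed.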
