Monday
Sep 08, 2008

Cloud Parsing

More clarity (and practicality) is emerging in the conversation regarding cloud computing. Joe Weinman's response to my previous post goes straight to the heart of it: there are reasons why the pure, platonic form of cloud computing just won't satisfy the requirements or live within the constraints placed on corporate production computing.

Dan Woods has a very good article this morning at Forbes.com in which he identifies three issues that will bring complexity to cloud-based solutions: (1) governments, (2) network topology and (3) quality of service.

With all respect to his choices, I'd probably use different terminology when laying out the issues for a geekier audience. (The column IS entitled JargonSpy!!)

(1) government regulation regarding the jurisdiction in which certain kinds of data must remain is a big issue. But there is also a whole slew of industry standards (such as PCI DSS) and plain "best practices" that recommend keeping close watch on where data lives and where it's processed. Compliance -- whether with regulation, industry standard or corporate best practice -- is a more inclusive concept.

(2) the network topology issue is really about latency rather than speed. (See Weinman's Cloudonomics Rule #8 -- and the back-of-envelope sketch after these three points.)

(3) quality of service is a loaded term for those of us with networking backgrounds. Woods' use of the term makes sense to the general readership. In fact, it's an amalgam of service properties that seem to rest primarily on reliability, availability, performance and security. For good measure, we also need to consider the connectivity and resilience of the cloud.
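On the latency point, a rough back-of-envelope sketch may help. Assuming (as I read Weinman's Rule #8) that average latency falls roughly with the square root of the number of service locations, cutting latency in half takes about four times as many locations. The Python below is purely illustrative:

    # Illustrative only: assumes latency scales with 1/sqrt(number of locations),
    # which is my paraphrase of Weinman's Cloudonomics Rule #8.
    import math

    def locations_needed(current_locations, latency_cut_factor):
        """Locations required to divide average latency by latency_cut_factor."""
        return math.ceil(current_locations * latency_cut_factor ** 2)

    print(locations_needed(2, 2))   # halve latency: 8 locations instead of 2
    print(locations_needed(2, 4))   # quarter latency: 32 locations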

This article is heartwarming, in that it starts to add some texture and the appropriate measure of sophistication into the thinking around cloud computing. Thanks, Dan.

(Thanks to OnSaas for pointing out the article.)

Parsing The Cloud - Forbes.com

Cloud computing is a rich vein of semantic ore for the JargonSpy because so much that is said about the cloud makes it all seem so simple. Most of the time, the story goes like this: We have an application like Salesforce.com or Google Apps, or an application programming interface to a service like Amazon.com's EC2 or S3, and we ask it to do stuff for us. Then, out there in the cloud, it all happens. We don't have to worry about what happens in the cloud, and we do not really care where it is, who else has their stuff there or how it all works.

The days of not caring are quickly coming to an end. The cloud as an abstract entity in a place you don't have to worry about will be replaced by clouds that have geographies, special purposes, other companies and rules that guarantee compliance with regulations. This week JargonSpy takes a look at how and why the cloud will transform from the simple to the complex. ...

Sunday
Sep 07, 2008

Cloudonomics -- and its Ten Laws

Gigaom has put out a piece by Joe Weinman, a Solutions Sales VP at AT&T Global Business Services. I enjoyed the laws, and I particularly liked Laws #2, #3 and #8.

I do, however, take some exception to the sentiment at the beginning of the piece. It feels as though the discussion begins with a presupposition: there's "good" cloud computing and "bad" (or at least, badly executed) cloud computing ... and it's all based on whether you use a utility service or decide to run your in-house datacenter as though it were a utility. There are advocates for either side, and they seem all too willing to go after one another with axes.

I'm not convinced there needs to be a dust-up about public utility clouds and private clouds. They have their respective advantages and disadvantages. What seems to be missing from the argument is the notion of spanning or cloudbursting. There's every reason to think that the benefits of cloud computing / utility computing are available to the organization that runs its "cloud" in a corporate datacenter, but makes use of the utility cloud when and if it's needed. This could be for scaling out an application or scaling it up (to use functionality unavailable within the corporate datacenter) for as long as the resource or functionality is necessary.
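To make the spanning idea concrete, here's a purely hypothetical sketch of a cloudbursting policy. The thresholds, names and capacity figures are invented for illustration; they don't describe any actual product or API:

    # Hypothetical "cloudbursting" policy: run in-house by default, lease
    # utility-cloud capacity only while internal capacity falls short.
    INTERNAL_CAPACITY = 100   # units of work the corporate datacenter can absorb
    BURST_THRESHOLD = 0.85    # burst to the utility cloud above 85% utilization

    def place_workload(internal_load):
        """Return where the next unit of work should run."""
        if internal_load / INTERNAL_CAPACITY < BURST_THRESHOLD:
            return "corporate datacenter"
        return "utility cloud"   # only for as long as the spike lasts

    for load in (40, 80, 90, 120):
        print(load, "->", place_workload(load))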

So why (other than for sales positioning) does the end-user need to pick one or the other? It seems logical (and practical) that for many enterprises, the hybrid approach will be necessary -- some while in transition, others permanently.

Let's stipulate that both the public compute utility and the private enterprise cloud have their merits. And, furthermore, that the appropriate recipe for a specific enterprise may require a mix of both.

There, now. That wasn't so hard, was it?

The 10 Laws of Cloudonomics - GigaOM

Public utility cloud services differ from traditional data center environments — and private enterprise clouds — in three fundamental ways. First, they provide true on-demand services, by multiplexing demand from numerous enterprises into a common pool of dynamically allocated resources. Second, large cloud providers operate at a scale much greater than even the largest private enterprises. Third, while enterprise data centers are naturally driven to reduce cost via consolidation and concentration, clouds — whether content, application or infrastructure — benefit from dispersion. These three key differences in turn enable the sustainable strategic competitive advantage of clouds through what I’ll call the 10 Laws of Cloudonomics. ...

Monday
Sep 01, 2008

Watching Microsoft Positioning its V12N Offers

Today's a holiday that acts as a seasonal bookmark and starting gun. For all intents and purposes, everyone's back from the summer holiday, and about to kick into high gear. And with VMworld 2008 coming up in two weeks, we can all figure on getting bombarded with announcements from the ecosystem that relies on VMware.

Then, there are the alternate universes built around Xen and, as a universe unto itself, Microsoft. On Sept. 8, Microsoft is sponsoring a (re-)launch event that's clearly designed to steal some thunder. Here's CIO.com's take on it. The aspect that caught my eye is the emphasis on management of virtualized infrastructures ... manageable with "the same tools you're already using for your physical infrastructure."

Does anyone with experience in putting together a working server virtualization project actually believe that statement?

In the VMware ecosystem, a recently published (vendor-sponsored) survey reports that for infrastructure reporting, 35% use the same tools as for the physical environment, while 22% use VMware's management system (VirtualCenter), and 2% use a third-party solution. (What's not clear is how many of these are in-house "experiments" and how many are mission-critical deployments.) What we will see this fall are announcements from a wide range of players who want to "fill the virtualization management gaps" in the VMware ecosystem. (Replicate will be no exception. We've staked out our part of that territory!)

But, what does this claim mean when uttered by Microsoft regarding Hyper-V? It suggests a time-honored Microsoft business model: Sell the hypervisor at a very low (give-away?) price, then incorporate the requisite enhancements and functionality needed for Hyper-V into rather costly management systems. This approach has certainly worked for MSFT in the past as they addressed the corporate IT market for database, application, and workplace collaboration systems.

It makes me wonder just how open a marketplace will exist for Hyper-V infrastructure management.

Microsoft Starts Virtualization Hype Blitz - CIO.com - Business Technology Leadership

...
The question is whether Microsoft's content is worth the time and attention.

In general, the answer is probably yes. Microsoft's virtualization software still doesn't compare to VMware's, according to most of the experts I talk to, but it's much closer than a major Microsoft product could be expected to be at this stage of its development.

Even Microsoft can't hold center stage just talking about a hypervisor that's already been released, though. Even offering exclusive or semi-exclusive interviews with rarely accessible top Microsoft execs—which Microsoft is currently doing with both Kevin Turner and Bob Muglia—won't guarantee the amount of space needed to affect the potential impact of VMworld.

So Microsoft's expanding to take on the rest of the virtualization universe as well. The event materials it posted and distributed to the press say the company will roll out new products designed to build virtual infrastructures "from the data center to the desktop," that are manageable with "the same tools you're already using for your physical infrastructure."

Saturday
Aug 30, 2008

Mind the gap - Corporate IT Management Shortcomings

Yes, the survey was sponsored by a company that has a vested interest in the result. And, yes, the press release is designed to make you shake your head in shocked bewilderment. That doesn't mean it's not accurate. The management challenges "arising from hybrid physical and virtual infrastructures" are for-real problems -- arguably scary enough to be a real barrier to the adoption of virtualization for production computing in a large number of corporate datacenters.

NetIQ: NetIQ Virtualization Survey Results Reflect Lack of Systems and Application Management Basics

HOUSTON – As the adoption rate of virtualization technology increases, organizations face new management challenges arising from hybrid physical and virtual infrastructures. While companies turn to virtualization to reduce IT expense and increase service capacity, a recent study conducted by NetIQ Corporation, an Attachmate business, revealed that very few companies are taking the necessary steps to extend systems management basics to ensure application performance, service availability and end user experience across this complex hybrid environment. As a result, they risk offsetting the many benefits and ultimate cost savings virtualization technology promises.

Comprised of feedback from over 1,000 respondents within more than 800 different government, enterprise and small-to-medium organizations worldwide, only 21 percent of 759 respondents currently deploying virtualization have any kind of systems management solution for their virtual infrastructure. Overall, survey responses demonstrate that:

* Approximately 27 percent are managing the performance and availability of their virtual systems with the same tools they utilize on their physical servers;
* Just 17 percent are simply monitoring the virtual hardware or the operating system; and
* Only 10 percent are proactively gauging end-user response time while 15 percent are simply considering it.

Monday
Aug 18, 2008

Clouds on the inside

Ken Oestreich has a good post today, calling out those who would limit the notion of "cloud computing" to the use of an external service. I concur 100 percent.

At least one defining issue for cloud computing is the way in which existing management systems are marshalled in order to provide safe, isolated use of "raw resources", in conjunction with those shared services (such as Active Directory, LDAP, DHCP, ...) that are established as the basis for service management and service levels. A simpler notion may be that any data center, run internally or externally, can offer "cloud" computing based on a utility, "self-serve" operational model.
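As a purely illustrative sketch (the field names are invented, not any product's actual format), a self-serve request in such a model might pair the raw resources being asked for with the shared services the datacenter already runs:

    # Invented example of a self-serve provisioning request: raw resources
    # are bound to existing shared services and an isolation boundary.
    request = {
        "raw_resources": {"vcpus": 4, "memory_gb": 16, "storage_gb": 200},
        "shared_services": ["Active Directory", "LDAP", "DHCP"],
        "isolation": "dedicated VLAN",
        "service_level": "production",
    }
    print("Provisioning against:", ", ".join(request["shared_services"]))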

Clearly, this applies as well to the form in which BOTH internal and external resources are used in concert --- when the "internal" cloud requires additional resources from an external service cloud in a scale-out or scale-up situation. At Replicate, we've often referred to this as "spanning". The folks at the 451 Group seem to like the term "cloudbursting." Whatever you call it, making the internal-external distinction the definition is a trap at best, and should be relegated to the class of thinking known as "lazy."

Fountainhead: Creating a Generic (Internal) Cloud Architecture

I am simply trying to challenge the belief that cloud-like architectures have to remain external to the enterprise. They don't. I believe it's inevitable that they will soon find their way into the enterprise, and become a revolutionary paradigm of how *internal* IT infrastructure is operated and managed.

With each IT management conversation I've had, the concept that I recently put forward is becoming clearer and more inevitable. That an "internal cloud" (call it a cloud architecture or utility computing) will penetrate enterprise datacenters.
...

So here is my main thesis: that there are software IT management products available today (and more to come) that will operate *existing* infrastructure in a manner identical to the operation of IaaS and PaaS. Let me say that again -- you don't have to outsource to an "external" cloud provider as long as you already own legacy infrastructure that can be re-purposed for this new architecture.

This statement -- and associated enabling software technologies -- is beginning to spell the beginning of the final commoditization of compute hardware. (BTW, I find it amazing that some vendors continue to tout that their hardware is optimized for cloud computing. That is a real oxymoron)