Server virtualization... it's business critical

What more can I say?  Server virtualization is definitely not a flash in the pan.

Server virtualization rated as business critical by majority of large organizations, says TheInfoPro | Tekrati Research News

TheInfoPro today announced that over two-thirds of users view server virtualization -- now the second highest priority among Server professionals -- as critical to achieving their business objectives, with another 28 percent viewing it as valuable. Ninety percent of Server professionals view virtualized servers as the next enterprise IT server platform. Further, they believe that hosting applications on their own physical servers will become the exception.

“There are many factors driving this continued shift to a virtualized server environment, but the primary motivator indicated for F1000 organizations is cost-savings, followed closely by server sprawl, and consolidation,” said Bob Gill, TIP's Managing Director of Server Research. “Though cost savings will still be considered important, by 2010, as organizations catch up with the current pace of virtualizing environments, this line of thinking will give way to other drivers being considered primary benefits, such as dynamic provisioning and disaster recovery.”

According to TIP's study, this relentless expansion of virtualization and growth in new applications has given rise to a continuous increase in server software spending, with 50% of organizations indicating increased spending. In fact, Server pros indicate that, in some cases, over $5M is being spent on these products yearly.



Ping and pray

This snippet comes from Virtual Strategy Magazine, courtesy of Tarry Singh.  The point of the post is that while migration (hot, warm, or cold) is one of the blessings of server virtualization, conventional approaches to failover are no longer sufficient. The first point is particularly noteworthy: if clustering and failover in the virtualized data center require serious investment in manual configuration, setup, scripting, and testing, we've lost a good deal of the benefit.

Virtualization For Everyone: Marathon Technologies on Virtualization and High Availability

But protecting virtual environments from unplanned downtime is a different matter. In many cases, virtual environments employ traditional clustering and failover techniques, which use rudimentary heartbeat pings to check the status of a virtual machine. This approach suffers from several drawbacks:
  • Clustering and failover add cost and complexity to the environment, requiring manual configuration, setup, scripting and testing to define the appropriate actions to take in case of failures. This additional administrative complexity can introduce errors, contributing to availability issues.
  • Heartbeat pings are unable to reliably determine the health of a virtual machine and may not distinguish between I/O path failures, server failures, and lack of system resources. In some cases, these limitations may result in unnecessary or false failovers. In other cases, discrete storage or network device outages are not identified as failures and the system does not fail over.
  • The failover process is far from certain; it assumes that the administrator has configured the standby system appropriately for the application and has maintained that configuration. If the target system is not configured appropriately, then when a failover does occur, the application or virtual machine is inoperable on the standby system, causing a "failed failover." Given the sense of uncertainty, some refer to this approach as "ping and pray."
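The weakness the article describes can be seen in a toy model. The sketch below (all function and field names are my own hypothetical illustrations, not any vendor's API) shows a rudimentary heartbeat monitor: it collapses every distinct failure mode into a single missed-ping bit, so a transient network blip looks identical to a dead host, while a failure the guest can still ping through goes unnoticed.

```python
# Toy model of "ping and pray" failover: the monitor only sees whether a
# heartbeat answer arrived, not *why* one is missing.  (Hypothetical names;
# a sketch of the rudimentary heartbeat check the article criticizes.)

def heartbeat_ok(vm_state):
    """A heartbeat succeeds only when every layer happens to be healthy."""
    return (vm_state["host_up"]
            and vm_state["net_up"]
            and vm_state["guest_responsive"])

def naive_failover_decision(vm_state, missed_threshold=3):
    """Count missed pings and fail over after a fixed threshold.

    The decision collapses distinct conditions -- dead host, flaky switch,
    CPU-starved guest -- into one bit, which is exactly the weakness
    described above.
    """
    for _ in range(missed_threshold):
        if heartbeat_ok(vm_state):
            return "no_failover"
    return "failover"

# A transient network blip triggers failover though host and guest are fine:
blip = {"host_up": True, "net_up": False, "guest_responsive": True}
print(naive_failover_decision(blip))          # -> failover (a false positive)

# A storage-path outage that leaves the guest pingable is never detected:
storage_fail = {"host_up": True, "net_up": True, "guest_responsive": True}
print(naive_failover_decision(storage_fail))  # -> no_failover (a missed failure)
```

Both outcomes are "failed failovers" waiting to happen, which is why the quoted bullets argue that a health check needs visibility into I/O paths, host state, and resource starvation separately.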



Grid Gurus

Rich Wellner, my friend and former colleague at Univa UD, has started editing a blog called Grid Gurus. He's got a very talented group of people contributing, including Ian Foster, Tim Freeman, Sinisa Veseli, and Scot Koranda.  The blog already has a nice collection of themes developing ... and I can't wait for the next 434 parts of "Better Know a VM."

Grid Gurus: Why the grid is still important

Grid computing is celebrating 11 years next month, and is poised to become increasingly mainstream in the coming years. There are a number of reasons that this is true, and most of them are the time-tested ideas that have been proving themselves in research institutions and businesses for years. The grid is about allowing your organization to run more efficiently and more effectively than can be done with more conventional technology solutions. It's about bringing many machines together in coordination around a task. It's about bringing data storage and movement to bear in a coordinated fashion with your application. It's about allowing people from different parts of your organization to work together more easily.
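"Bringing many machines together in coordination around a task" has a simple shape even in miniature: split a job into independent pieces, farm them out, and combine the results. The single-host sketch below (function names are my own, hypothetical) stands in for that pattern; on a real grid the worker pool would span machines rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor

def count_words(chunk):
    """Work unit: count words in one chunk of a larger document."""
    return len(chunk.split())

def coordinated_count(chunks, workers=4):
    """Scheduler stand-in: distribute chunks to workers, reduce the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_words, chunks))

chunks = ["the grid is about coordination",
          "across many machines",
          "around one task"]
print(coordinated_count(chunks))  # -> 11
```

The grid software's real job is everything this sketch hides: discovering resources, moving data to where the work runs, and handling failures along the way.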


Gartner advises IT execs for 2008

It comes as no surprise that virtualization shows up on Gartner's top 10 roster for 2008, but David Marshall points out that virtualization is also closely related to, and an enabler of, many of the other items on the list.

David Marshall | InfoWorld | Gartner's Top 10 for 2008 - Outlook Virtual and Green

... When talking about Gartner's top 10 list, Hillier made an interesting observation when he said that virtualization has become the unwitting enabler for many of the top 10 items listed by Gartner. And he added that many of the top 10 issues didn't even exist when virtualization was first conceived, yet its advantages in efficiency and flexibility make it an obvious choice to deal with many of them.

When asked about virtualization specifically, Hillier noted that "Firstly, virtualizing the 'low hanging fruit' is completely different than tackling the complex mission critical systems that reside deep in the data center. This will cause organizations to place more rigor on the planning and execution of these initiatives, and to integrate this more seamlessly into the long-term management of the environment. In addition, virtualization is only truly transformational if the paradigm is embraced fully, and quick-and-dirty implementations will leave you with a 'virtual physical' environment, that looks and smells like a physical environment but takes up less rack space. This may be expeditious, and indeed is warranted in certain situations, but it completely undersells the potential of the technology." ...


Netsec and Virtsec

Greg Ness of Blue Lane has a great post at AlwaysOn. Somehow, he manages to hit almost all of the strongly held views I have regarding the impact of virtualization on data center networks.  Among the most relevant:

  • Virtualization changes the game for network security.
  • The mobility of virtual machines on a network revisits a scenario in the data center that was last seen when end users started plugging dumb WiFi access points into corporate LANs.
  • Fabrics replace pipes in the corporate network.
  • More proof (?) that attending only to perimeter security will come back to bite you.  One has to consider the "threat from inside" and the problem of the "soft middle."
Nice job, Greg.

Netsec and Virtsec: Weird Scenes inside the Gold Mine | AlwaysOn

Virtualization will Disrupt Security

The next reason is virtualization. While virtualization has become widely known for energy savings and data center consolidation, its power to increase the flexibility of an IT organization has been undersold. While Wall Street and a handful of companies now get it, I think the network security world is in the process of being shocked into submission. A recent Pacific Crest report predicts the virtsec market will reach $1-$2 billion in the next 3-4 years. Yet the netsec vendors are notably absent with any real products.

Many of the netsec experts are just starting to realize that virtualization is about to turn the hardware game upside down and drive even the most successful appliance vendors to convert their hardware into software appliances. While editors and pundits wax and wane about power and real estate savings and whether virtualization is more or less secure (than physical infrastructures), a much deeper fundamental shift is about to take place and pull the rug out from under the netsec hardware ecosystem.

Servers Going Mobile

By its very nature virtualization decouples hardware from operating system and application. A hypervisor platform is the equivalent of a new and very powerful data center operating system that allows servers to be created, saved, reverted to an earlier version and moved online and offline and across various host servers, all at the click of a mouse. Compare that to the world of racks of custom hardware and approval processes typically required to make moves or changes.

By decoupling software from hardware, virtualization is putting in place the preconditions for a massive shift in the network appliance business, from application delivery to network security. We’re about to see data center [racks of specialized custom chips sitting inside heterogeneous panoplies of tin-wrapped circuit boards and wandering cables] convert into uniform racks of powerful blade servers. The world of servers defined by operating system and applications will become the world of virtual servers (virtual machines or VMs) directed by mouse click across processors, hardware or even an entire data center. ...
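The "data center operating system" operations the quote lists (create, save, revert, migrate at a mouse click) can be captured in a toy state machine. The class and method names below are my own hypothetical illustrations, not a real hypervisor API; the point is that once a server is data rather than hardware, each of these operations is just a data manipulation.

```python
import copy

# Toy model of the hypervisor capabilities described above: a "server" is
# now a data structure, so snapshots, rollback, and moves across physical
# hosts are cheap operations.  All names here are hypothetical.

class VirtualMachine:
    def __init__(self, name, host):
        self.name = name
        self.host = host                 # which physical host runs the VM now
        self.config = {"cpus": 2, "ram_gb": 4}
        self._snapshots = {}

    def save(self, label):
        """Snapshot the machine state; cheap, because it's just data."""
        self._snapshots[label] = copy.deepcopy(self.config)

    def revert(self, label):
        """Roll back to an earlier version -- no rack visit required."""
        self.config = copy.deepcopy(self._snapshots[label])

    def migrate(self, new_host):
        """Move the running server across hosts; the hardware stays put."""
        self.host = new_host

vm = VirtualMachine("web01", host="blade-a")
vm.save("before-upgrade")
vm.config["ram_gb"] = 8      # a change that might need undoing
vm.migrate("blade-b")        # the server moves; no cables are touched
vm.revert("before-upgrade")
print(vm.host, vm.config["ram_gb"])  # -> blade-b 4
```

Contrast each method here with its physical-world equivalent (imaging a disk, re-racking a box, filing a change request) and the disruption Ness predicts for the hardware-appliance business becomes easier to see.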
