Saturday, Dec 15, 2007

Componentizing IOS!! But ... when?

Cisco's reorganization of the development group is truly significant.  If the company can actually commit to this self-administered genetic engineering ... no mean feat ... they have my admiration.   I spent a day or two this past week thinking about it, and wondered what I would consider a clear indicator of commitment to the new path.  Well, folks, this is it... opening up IOS.

This is a game-changing move for the company, but only if it's a timely move.  How quickly will they do it? How quickly can it be done?

Cisco opening up IOS - Network World

... "It's a significant step forward for us," said Don Proctor, senior vice president of Cisco's newly formed Software Group, at last week's C-Scape 2007 analyst conference. "Software turns out to be a key way that we can do what [we've] been talking about for some time, which is link business architecture to technology architecture in a meaningful way."

Cisco plans to "componentize" IOS – developing only one implementation of a specific function instead of several, depending on the image – dynamically link IOS services and move the software onto a Unix-based kernel. Cisco then plans to open up interfaces on IOS to allow third-party and customer-developed applications to access IOS services.
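To make "componentize" concrete: the idea is one canonical implementation of each service, resolved by callers at runtime rather than compiled separately into every image. Here is a toy sketch of that pattern -- purely illustrative, not Cisco code, and not any real IOS interface:

```python
# Illustrative sketch only: a single shared implementation of each service,
# registered once and bound to callers at runtime, instead of per-image
# copies baked into a monolithic build. All names here are invented.

class ServiceRegistry:
    """Maps service names to their one canonical implementation."""
    def __init__(self):
        self._services = {}

    def register(self, name, impl):
        # "Only one implementation of a specific function": refuse duplicates.
        if name in self._services:
            raise ValueError(f"duplicate implementation of {name!r}")
        self._services[name] = impl

    def lookup(self, name):
        # Callers -- including third-party applications -- bind here at
        # runtime, analogous to dynamic linking against an exported interface.
        return self._services[name]

registry = ServiceRegistry()
registry.register("routing.ospf", lambda: "ospf service")

# A third-party application resolves the service through the open interface:
ospf = registry.lookup("routing.ospf")
print(ospf())  # -> ospf service
```

Swap the dictionary for dlopen-style dynamic linking against a Unix kernel and you have roughly the mechanism the article describes: external code binding to exported service interfaces at runtime.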

However, no timeframe for doing so was provided.
...

Powered by ScribeFire.

Wednesday, Dec 12, 2007

Fluidity, integrity and security

James Urquhart points out an interesting aspect of the conversation I've had with Greg Ness about network security and network integrity in a virtualized environment.  I got on my soapbox about network integrity, primarily to call attention to the issue, and (as James correctly assumes) not without a healthy dose of respect for the importance and role of application architectures.

As we at Replicate Technologies have drilled into the issues of application safety in the virtualized environment, we've had the opportunity to speak with a number of application architects, VME administrators, and a few (just a few) IT managers responsible for network operations.  What's striking is that even though VMware goes out of its way to simplify virtualized network issues (by limiting the power and complexity of the virtual switch that you'll find in every ESX box), many of the application folks tend to use only the simplest approaches.  They work, but with some severe limitations on the safety and scalability of the VME installation.

So, to James' observation about my over-emphasis, I'd say: You're correct. I don't believe that one relies on infrastructure to keep the whole application running as the VMs comprising that application are moved around the network. My emphasis comes from the realization that network integrity -- getting the infrastructure "right" -- is a prerequisite, and one that application architects often seem to ignore when considering applications living in a virtual machine environment.  It's necessary, but certainly not sufficient.

In a conversation this week with my co-conspirators at Replicate, Rich Pelavin recalled Saul Steinberg's famous New Yorker magazine cover of the world as seen from Ninth Avenue.  If you ask the application architect to map out the implementation, there are lots of servers, multiple interacting processes, each with exposed ports and interfaces, and they all connect with one another (and the rest of the 'net) by connecting to an undifferentiated network represented as "the cloud."  Treating the network as a dumb pipe is not just counter-productive in a virtualized environment... it's dangerous. 

(To be fair... the network architect and IT guy responsible for network management sees the world as a lot of network appliances and elements interconnected at specific ports.  They may not even recognize the existence of the server hardware, much less the applications!! )

As for James' request to hear what Replicate Technologies advises its customers about application architectures ... stay tuned.   We're focused on delivering network integrity -- integrity that spans the combined virtual and physical networks. That necessarily implies that we employ a means of treating applications as "VM flocks" and provide the means of operating on them as a unit.  I'll leave it at that for now.

Service Level Automation in the Datacenter: Software fluidity and system security

... Here is exactly where I believe application architectures are suddenly critical to the problem of software fluidity. In a well contained multi-tier application (a very turn-of-the-millennium concept) it is valid to consider the migration of the "flock" as a network integrity problem. However, when it comes to the modern world of SOA, BPM and application virtualization, suddenly application integrity becomes a dynamic discovery issue which is only partly dependent on network access.

In other words, I believe most modern enterprise software systems can't rely on the "infrastructure" to keep their components running when they are moved around the cloud. It's not good enough to say "gee, if I get these running on one set of VMs, I shouldn't have to worry about what happens if those VMs get moved". Rich hints strongly at understanding this, so I don't mean to accuse him of "not getting it". However, I wonder what Replicate Technologies is prepared to tell their clients about how they need to review their application architectures to work in such a highly dynamic environment. I'd love to hear it from Rich.  ...


Wednesday, Dec 12, 2007

Security 3.0 and the Perimeter Myth

Greg Ness writes about the myth of security at the perimeter, continuing the story of how we really need to concern ourselves with VirtSec and "the soft middle," not just the perimeter.

Security 3.0 and the Perimeter Myth | AlwaysOn

Over the last few weeks I’ve been talking to analysts and security pros about virtualization, security and the evolution of netsec to virtsec. Last week I was in Los Angeles on a virtualization panel at the InformationWeek Virtualization Summit and then in NYC on a MISTI panel on virtsec.

As a result of several discussions, I’ve come to the conclusion that for many organizations their network really doesn’t have a perimeter, at least in the classic sense of defense. The idea of a strategic point of defense that protects what is inside has become a legacy myth, an anachronism from the early days of netsec and fame-seeking hackers.

...

THEN WHAT'S NEXT FOR NETSEC?


In the short term the netsec hardware vendors MUST announce a virtsec product in 2008. Being late to the party will cost them substantial vision and revenue growth points. As I commented before, these 2008 virtsec announcements will likely be vaporware because of the substantial difficulties in moving from signature processing (usually ASIC) "architecture crunch" to massive hypervisor footprints. Maybe these products will be broken into multiple parts in order to lessen the load on individual servers and avoid massive processing burdens. Maybe they'll find a creative way to exploit the hypervisor layer from afar? Either way, they are in a world of computational disadvantage until they understand the nature and weaknesses of the applications they are defending. ...


Monday, Nov 19, 2007

Wired Scenes -- Netsec and Virtualization

Greg Ness continues to develop a narrative that hits close to home.  It's clear that his lunchtime conversations with Allwyn Sequeira get into the "deep tech."

Greg and Allwyn arrive at some interesting conclusions with respect to network security and virtual machine migration.  In response to a comment on his Archimedius blog, Greg characterizes the idea of limiting VM movement to specific security domains as a short term fix.  In a sense I agree, but my consideration of the long term brings me to the conclusion that the solution lies in policy implemented by "smart" constraints.

Greg hits one key issue when he states:

There is also the issue of lock-stepping server and network policies, at a time where virtualization is enabling more responsiveness.

Right.   But what if the independently established network policies and the independently established server requirements could be reconciled? What if we could identify a solution that satisfies the requirements of both -- a workable compromise?  That's the approach we're pursuing at Replicate Technologies, the company I officially joined last month.
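As a toy illustration of that reconciliation (entirely my own example; the host names, VM names, and VLAN data are invented): treat the network team's policy and the server team's requirements as independent constraint sets, and look for placements that satisfy both.

```python
# Hypothetical sketch of reconciling independently stated constraints.
# None of this reflects any real product; the data is made up.

network_policy = {          # which VLANs each host's switch ports carry
    "host-a": {"vlan10", "vlan20"},
    "host-b": {"vlan10"},
    "host-c": {"vlan10", "vlan20", "vlan30"},
}

server_requirements = {     # which VLANs each VM must be able to reach
    "web-vm": {"vlan10"},
    "db-vm":  {"vlan10", "vlan20"},
}

def feasible_hosts(vm, requirements, policy):
    """Hosts whose network policy satisfies the VM's requirements --
    the 'workable compromise' is any placement drawn from this set."""
    needed = requirements[vm]
    return {host for host, vlans in policy.items() if needed <= vlans}

for vm in sorted(server_requirements):
    print(vm, "->", sorted(feasible_hosts(vm, server_requirements, network_policy)))
```

If the intersection is empty for some VM, the two policies are genuinely in conflict and one side has to change -- which is exactly the conversation the tooling should force before a move, not after.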

Consider this:  It's not only network security, but also network integrity that must be maintained when supporting the group migration of VMs.  If one wants to move an N-tier application using VMware's VMotion, one wants a management framework that permits movement only when the requirements of the VM "flock" making up the application are met by the network that underpins the secondary (destination) location.  By that, I mean:

  • First, the assemblage of VMs needs to arrive intact. 

If, because of a change in the underpinning network, a migration "flight plan" no longer results in a successful move by all the piece parts, that's trouble.  If disaster strikes, you don't want to find that out when invoking the data center's business continuity procedure.  All the VMs that take off from the primary location need to land at the secondary.

  • Second, the assemblage's internal connections as well as connections external to the "flock" must continue to be as resilient in their new location as they were in their original home. 

If the use of VMotion for an N-tier application results in a new instance of the application that ostensibly runs as intended, but is susceptible to an undetected, single point of network failure in its new environment, someone in the IT group's network management team will be looking for a new job.
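The two conditions above lend themselves to a preflight check before the flock takes off. A minimal sketch -- the data model and names are invented for illustration, not drawn from any real management framework:

```python
# Invented example: preflight a group migration by checking (1) the whole
# flock can land at the destination and (2) each required network is
# reachable over more than one path there (no single point of failure).

flock = ["web", "app", "db"]                       # VMs comprising the application
required_vlans = {"web": "vlan10", "app": "vlan10", "db": "vlan20"}

destination = {
    "capacity": {"esx-3": 2, "esx-4": 2},          # free VM slots per host
    "uplinks":  {"vlan10": 2, "vlan20": 1},        # redundant paths per VLAN
}

def preflight(flock, destination, required_vlans):
    """Return a list of reasons the 'flight plan' would fail; empty means go."""
    problems = []
    # Condition 1: all the VMs that take off must be able to land.
    if len(flock) > sum(destination["capacity"].values()):
        problems.append("not enough capacity for the whole flock")
    # Condition 2: connections must stay resilient in the new home.
    for vm in flock:
        vlan = required_vlans[vm]
        if destination["uplinks"].get(vlan, 0) < 2:
            problems.append(f"{vm}: {vlan} has a single point of failure")
    return problems

print(preflight(flock, destination, required_vlans))
```

Here the check would flag the db tier: its VLAN exists at the destination, so the application would "ostensibly run as intended," but over a single uplink -- exactly the undetected failure mode described above.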

The "containers" I'm speaking of are both security containers and network integrity containers.  In order to be useful in a production computing environment, these containment areas must have identified permeabilities ... connections with peers, customers, and providers.   

I'll take the opportunity in the coming weeks to write more about the nature of the containers we (Replicate) foresee putting into place.  In the meantime, be assured that, like Greg and Allwyn, the last thing we want to put into place is a brittle, overly restrictive sandbox that incarcerates VMs.  In addition, we want to make best use of VMs and VLANs, while simultaneously restraining any VM under management from making VLAN connections that would compromise network security.

Virtsec in the Trenches | AlwaysOn

...

LIMIT VM MOBILITY TO WITHIN SECURITY DOMAINS

At my Archimedius blog one of the visitors shared his strategy, which involved limiting VMs to movement within specific security domains. While this is a smart short term step, over the medium and long term it still represents limiting the infrastructure agility enabled by virtualization at the heart of the business case. There is also the issue of lock-stepping server and network policies, at a time where virtualization is enabling more responsiveness.

Even if you constrain movement, the VMs behind well-tuned intrusion protection systems and firewalls may still be vulnerable. Unpatched instances can appear minutes after signature tunes or vulnerability scans. As I mentioned in The Beginning of the End, the heightened flexibility of virtualization introduces change factors that hasten the obsolescence of any static security measure. You may limit VM traffic within security domains, but you still have the issue of percolating vulnerabilities within each domain microcosm. Adding more domains means more management and observation resources and less flexibility.

RE-ARCHITECT THE NETWORK

A similar notion is to architect the network around security (and other constraints). This may be tolerable in the short term, but over time you could offset some of the benefits and flexibility of virtualization by constraining traffic to cordoned-off VLANs. In his NGDC deep dive presentation (which inspired this blog series), Blue Lane CTO Allwyn Sequeira notes the eventual outcome: "VLAN spaghetti", a term used long before virtsec. Taking it a step further you get heightened complexity and constrained mobility.  ...


Sunday, Nov 18, 2007

MAC Attacks and Disguise

When I started reading this, I thought it was going to go in a completely different direction... something akin to providing VMs with an unambiguous name/identifier that would potentially ease some of the burdens of VM management.  Whoa... was I wrong on that one.

Kutz posits that in order to defend VMs from malicious attacks, administrators might disguise a VM by giving it a MAC address suggesting a type of server other than what it really is.  This, he suggests, would make it less amenable to programmatic attacks.  Well... that might be the case, but it raises other issues of VM management, administration, and discovery by legitimate third parties.  It would also place a distinct burden on VM management systems (such as VMware's VirtualCenter) to support this kind of disguise without, itself, getting confused about what kind of device is sitting out on the network.
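The OUI giveaway Kutz describes is easy to demonstrate. The prefixes below are real registered OUIs for virtual NICs; the lookup function itself is just my illustration of the fingerprinting (and the disguise) he has in mind:

```python
# Sketch of hypervisor fingerprinting via the MAC's 24-bit OUI prefix.
# The OUI-to-vendor mapping is factual; the function is illustrative.

KNOWN_VIRTUAL_OUIS = {
    "00:50:56": "VMware",                      # manually assigned range
    "00:0c:29": "VMware",                      # auto-generated range
    "00:03:ff": "Microsoft virtual machine",   # Virtual PC / Virtual Server
    "00:16:3e": "Xen",
}

def guess_platform(mac):
    """Infer the hosting platform from the first 24 bits (the OUI)."""
    oui = mac.lower()[:8]
    return KNOWN_VIRTUAL_OUIS.get(oui, "unknown / possibly physical")

print(guess_platform("00:50:56:ab:cd:ef"))  # -> VMware
# The disguise: an ESX-hosted VM wearing a Microsoft OUI steers a
# would-be attacker toward the wrong family of exploits.
print(guess_platform("00:03:ff:12:34:56"))  # -> Microsoft virtual machine
```

Which is precisely why the disguise cuts both ways: any management or discovery tool doing the same lookup will be misled just as thoroughly as the attacker.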

Return of the MAC — Server Virtualization Blog

... Virtualization vendors also produce Ethernet adapters — virtual network interface cards (NICs). Most VMs would be rather useless if they could not access some sort of network, so virtualization vendors must create virtual NICs in order for the VMs to get on the big wide world of Webs. And since these virtual NICs have to participate on the network just as if they were physical, they must use MAC addresses. Because the first 24 bits of these MAC addresses, the OUI, is organization-specific, there is a real potential for network administrators to detect not only if a machine on the network is virtual by its MAC address, but also what type of virtual machine it is (what vendor’s software is hosting it). While best practices dictate that you do not change the MAC address of VMs, enterprise virtualization solutions do present this as an option, and, because of this, here is the scenario I see occurring.

One way to harden the Apache Web server is to use mod_security to alter the Web server’s signature. For example, you can fool clients into thinking that the Web server hosting their favorite videos is actually a Microsoft Internet Information Systems (IIS) 5.0 server instead of Apache 2.2. Administrators do this in order to fool attackers into attempting the wrong types of attack vectors. Even though best management practices dictate that administrators NOT alter their VMs’ MAC addresses, I foresee them doing so anyway in order to fool would-be hackers into attempting the incorrect attack vectors on VMs. For example, if a VM is hosted on ESX and its MAC address has an OUI registered by Microsoft, then a would-be attacker may try known Microsoft Virtual Server or Hyper-V exploits on the VM instead of ESX exploits.

Who knows? Twelve months from now altering a VM’s MAC address to be that of another vendor may be considered a best practice, but right now, with the already complex problem of managing virtual hardware, IT administrators are best served to leave their VM MAC addresses well enough alone.

Of course, that doesn’t stop the idea from being completely and utterly cool!
