Attention Span 09.07.25


3-D projection on German building created by UrbanScreen


Relax, Bloggers. The AP Isn’t Out to Get You

The level of nervous blogging and twittering following the New York Times article, including Jeff Jarvis' How (and why) to replace the AP, has been pretty extreme. I was reassured to see Ryan Chittum's post in the CJR, which he wrote after actually going to the source... the AP. Now, I'm sure there are those in the blogosphere about to accuse the AP of spin and damage control, but I'm tempted to take to heart this quote from a Senior VP of product development at the AP:

“Are we going to worry about individuals using our stories here and there? That isn’t our intent. That’s being fueled by people who want to make us look silly. But we’re not silly.”


Russian Programmers Petition Russian Government for a Holiday

Russian programmers have petitioned the Administration of Russian President Medvedev to establish a professional holiday... Day of the Programmer. September 13 (the 256th day of the year in the Gregorian calendar) is being promoted as the official Programmers' Day. (Translated article from C-News)



Hoff Kicks Up Dust with a Security API for Cloud

In May, Christofer Hoff wrote a very interesting post about one of the unintended consequences of enterprise use of cloud services... the serious and costly impact of right-to-audit (RTA) clauses in service contracts on *aaS providers. He posits that enterprise customers now make much heavier use of the RTA. So distinctly different is the level of use that it's generating exceptionally high costs for the *aaS provider.

The sense I get (from Hoff, the comments, and from speaking directly to cloud service providers) is that the RTA clauses have been just that... clauses in a contract, with very little actual planning and preparation by the individual service provider. Thus, every invocation of the right ends up being a one-off project, distracting the provider and diverting precious engineering and operations resources. (At the time, I commented that it sounds like a commercial opportunity.)

Yesterday, Hoff posted an extension to the concept, suggesting a security API for Cloud Stacks. I was 'off the grid' most of the day, but discovered when I reconnected that it had generated a huge volume of twitter traffic. The post makes a sensible observation: the commonality among compliance structures suggests that they could be 'built in' to a security control model. The result would be open API(s), (voluntarily) implemented by the providers of *aaS stacks, providing an open architecture for scanning a provider's offering for network vulnerabilities, as well as for configuration management, asset management, ...

This way you win two ways: automated audit and security management capability for the customer/consumer, and a streamlined, cost-effective, and responsive way of automating the validation of said controls in relation to compliance, SLA and legal requirements for service providers.
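To make the idea concrete, here's a minimal sketch of the kind of provider-side audit endpoint such an open API might expose. To be clear, the class names, control IDs, and attestation shape are my own illustration, not Hoff's proposal or any real specification:

```python
from dataclasses import dataclass

@dataclass
class ControlStatus:
    """One compliance control a *aaS provider could expose via an open audit API."""
    control_id: str        # e.g., a PCI DSS or SCAP-style identifier (illustrative)
    description: str
    implemented: bool
    evidence_uri: str = ""  # pointer to machine-readable evidence, if any

class AuditEndpoint:
    """Toy stand-in for a provider-side security API that customers or their
    auditors could query, instead of invoking a manual right-to-audit."""
    def __init__(self):
        self._controls = {}

    def register(self, status: ControlStatus) -> None:
        self._controls[status.control_id] = status

    def attest(self) -> dict:
        """Machine-readable summary of implemented vs. missing controls, the
        sort of attestation that would replace one-off audit projects."""
        return {
            "implemented": sorted(c for c, s in self._controls.items() if s.implemented),
            "missing": sorted(c for c, s in self._controls.items() if not s.implemented),
        }

endpoint = AuditEndpoint()
endpoint.register(ControlStatus("PCI-1.1", "Firewall configuration standards", True))
endpoint.register(ControlStatus("PCI-3.4", "Render PAN unreadable at rest", False))
print(endpoint.attest())
# -> {'implemented': ['PCI-1.1'], 'missing': ['PCI-3.4']}
```

The point of the exercise: once the attestation is machine-readable, every customer's audit tooling can consume the same endpoint, which is exactly what turns a one-off RTA project into a commodity query.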

I haven't yet waded through the resulting twitterstorm, but it's formidable. It also gets into some strongly held, loudly presented 'religious' positions from a number of the security twitterati, which I don't pretend to comprehend.

This is a brilliant suggestion that will be operationally difficult to implement across the broad range of *aaS providers. The pragmatics of getting this kind of adoption start with getting agreement on a security automation regimen (such as SCAP). Once the flavor of security automation is settled, there will likely be a requirement for a trusted (open) third party -- a community effort or a vetted commercial ecosystem -- that shepherds this through the 'cloud community.'

I expect some substantive discussions within the CSA community, and look forward to them as an interested observer and (potential) beneficiary. I'm really interested to see who picks up the mantle. Given the importance of both improved cloud audit and an open security management regimen, I hope that the Feds jump into this as a constituency with a real economic need. If the government's production use of *aaS is as important as they seem to indicate, this should get big-time attention from the country's CIO.


PCI DSS Wireless Data Guidelines? Not so much.

PCI Security Standards Council Issues PCI DSS Wireless Guidelines

The retail industry has, for some time, been exercised about cardholder data being extracted feloniously from wireless networks. But apparently they won't get much in the way of additional reassurance, or a path to safety, from the PCI DSS Wireless Guidelines issued last week by the PCI SSC.

“It contains no changes to the PCI standard at all and the only thing really interesting about it is that they felt the need to issue it,” said David Taylor, founder of the PCI Knowledge Base.


'Security by Compliance Is No Longer Working.' Did it ever?

A number of people much smarter about data security than I have often made the point that one has to distinguish between passing a compliance audit and actually being secure. It reminds me of an education system that places so much emphasis on passing a competency test that the material being "learned" is completely secondary.

So, when I see reports of presentations like this, it makes me sad. It also makes me concerned for those who have their personal or corporate data protected by organizations focused on 'passing the test' as opposed to 'absorbing the material and putting it into action.' The point that Pironti makes in the presentation SHOULD be obvious.

If organizations continue to focus on security by compliance, he argues, the adversaries will continue to win as their attacks become more effective and more damaging. “Compliance can be a good starting point for securing information infrastructure and data if an organization has not put anything in place previously, but it cannot be the end point of the conversation.”

However, I'm not even sure what he means when he goes further to state that "(w)e need to stop thinking about information security and start thinking about information risk management.” Then there's

“The technology is just a vessel for the data and has little value by itself. By focusing on the data, enterprises will be better prepared for the challenges that they may face from any adversary.”

We should always be sure to consider that the 'map' is not the same as the 'territory.'


Dealing with Data During Cloudbursts

I enjoy reading Joe Weinman's posts. And today's post at GigaOM is no exception. He does a great job organizing the problem of data when considering the architecture of cloudbursting. Joe's post has prompted me to break my 'radio silence' of a few months.

In 4 1/2 Ways to Deal With Data During Cloudbursts, Joe calls out a number of architectures and discusses some of the considerations that impact the choice of a relevant scenario. I won't try to recreate the architectural strategies, but I was taken by how relevant these same constellations are when addressing some of the more advanced considerations of data clouds, peripatetic workloads and data governance.

1) Independent Clusters: This one is pretty straightforward, and Joe's characterization of "minimal communication and data-sharing requirements between the application instances running in the enterprise and cloud data centers" makes sense. The data-specific considerations in using cloud service resources mostly center on providing the user with a uniform (or at least acceptable) standard of data security.

2) Remote Access to Consolidated Data: This strategy is called out for those situations in which application instances running in the cloud require access to a single-instance data store, or data store(s) which must for various reasons remain within the confines of the enterprise data center.

Notice my 'or' in the last sentence. Besides architectural requirements that require a single-instance data store, the reality of enterprise IT is that data stewardship requirements often require the authoritative datum to remain within the enterprise data center.

3) On-Demand Data Placement: Weinman points out that

...if I/O intensity and/or network latency are too high for remote access, then any needed data that isn’t already in the cloud must be placed there at the beginning of the cloudburst, and any changes must be consolidated in the enterprise store at the end of the cloudburst. The question is: “How much data needs to get where, and how quickly?”

This is clearly the right question to ask first. If a large data set is required to be in close proximity to the cloud service application instances, enterprise IT may need to rely on a number of tactics to reduce delay in commencing cloud-based operation: high-bandwidth networking services, possibly made available on demand, and advanced WAN optimization technologies (e.g., data deduplication).
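As a back-of-the-envelope illustration (my own arithmetic, not from Joe's post), the "how much, how fast" question comes down to dataset size, link speed, and whatever reduction WAN optimization buys you:

```python
def transfer_hours(dataset_gb: float, link_mbps: float, dedup_ratio: float = 1.0) -> float:
    """Rough time to move a dataset into the cloud before a cloudburst.
    dedup_ratio > 1 models WAN optimization (e.g., 4.0 means only a quarter
    of the bytes cross the wire). Ignores protocol overhead and congestion,
    so real transfers will be slower than this estimate."""
    effective_gb = dataset_gb / dedup_ratio
    seconds = (effective_gb * 8 * 1000) / link_mbps  # GB -> gigabits -> megabits
    return seconds / 3600

# A 1 TB dataset over a 100 Mbps link: roughly 22 hours raw...
print(round(transfer_hours(1000, 100), 1))            # -> 22.2
# ...but about 5.6 hours if dedup cuts the traffic 4x.
print(round(transfer_hours(1000, 100, dedup_ratio=4.0), 1))  # -> 5.6
```

Numbers like these are why on-demand placement only works when the working set is modest, the pipe is fat, or the optimization ratio is high; otherwise pre-positioning (Joe's option 4) starts to look attractive.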

As in my consideration of remote access to consolidated data, on-demand data placement may imply a requirement for additional measures to deal with compliance and data stewardship. That calls on the purveyors of fast file transfer or on-demand, adjustable data transport services to offer a form of 'in-flight' data mediation; alternatively, the enterprise data center may be called on to implement dataset virtualization approaches or data masking systems in order to remain in compliance.

4) Pre-positioned Data Placement: He makes the point that pre-positioning "... adds additional costs as a full secondary storage environment and a metro or wide-area network must be deployed."

4.5) BC/DR Plus Cloudbursting: This was the point at which I chortled with recognition.

Thanks, Joe! I've been looking for the context in which to make this point for years. This has been a soapbox of mine for a long time ... almost since the notion of utility computing (now 'cloud computing') started circulating as a meme.

In addition to using cloudbursting as the premise on which to incorporate business continuity and disaster recovery costs into the calculation, I'd like to throw in at least one more, in hopes of getting to 4 3/4 ways to deal with data + cloudbursts. Please bear with me... this is a work in progress.

Data Governance, Data Stewardship and Data Residency:

Many of the issues relating to data in conjunction with cloudbursting are not new. When you stop to think about it, the 4 1/2 ways that Weinman outlines are variants of a generic data sharing problem across organizational boundaries. If we add any form of data sharing to the real cost of the enterprise data center, the issue we must address is that of Data Stewardship. It's been defined in various places, but here's one of my favorites since it places it in context with Data Governance.

Data Governance: The execution and enforcement of authority over the management of data assets and the performance of data functions.

Data Stewardship: The formalization of accountability for the management of data resources.

Data governance in the enterprise data center may require that a 'complete' record always be under the stewardship of the enterprise, and never at risk of being located in a different legal jurisdiction (e.g., the details of a financial transaction must remain in the immediate and direct control of the responsible financial institution). Examples abound, but one can point to financial and personal information which, for compliance reasons, must never leave the geographical borders of a country with stringent data protection regulation (e.g., not in that cloud-resident datastore in India or Switzerland).

In these cases, the implications of cloud bursting on data may require the addition of data masking/data obfuscation, or applications which are demonstrably proven to operate on meta-data of other kinds without jeopardizing data stewardship compliance. This particular aspect of Data Stewardship is sometimes called the Data Residency Dilemma.
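A residency policy of this kind is, at bottom, a lookup before placement. A trivial sketch, with data classes and region codes invented purely for illustration:

```python
# Toy data-residency guard. The data classes and region codes below are
# invented for illustration; a real policy would come from legal/compliance.
ALLOWED_REGIONS = {
    "customer_pii": {"de", "fr"},           # must stay within strict jurisdictions
    "telemetry": {"de", "fr", "us", "in"},  # far less constrained
}

def may_place(data_class: str, region: str) -> bool:
    """True if a dataset of this class may reside in the given region;
    unknown data classes default to 'place nowhere' (fail closed)."""
    return region in ALLOWED_REGIONS.get(data_class, set())

assert may_place("telemetry", "in")         # fine to burst to an Indian cloud
assert not may_place("customer_pii", "in")  # the Data Residency Dilemma in one line
```

The interesting engineering is not the lookup but wiring it into the cloudbursting workflow so that placement decisions are checked automatically rather than by audit after the fact.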

Getting to 4.75 - Data Governance Plus Cloudbursting: Even when the enterprise takes responsibility for data mirroring or replication to provide Business Continuity / Disaster Recovery, it may still be barred from Data + Cloudbursting by the costs and constraints of data governance. The question then arises: Are there services / technologies that can be provided by the *aaS supplier which can be brought to bear? To me, this appears to be a question of data center pragmatics rather than strictly an issue of recalculating the breakeven point.

There are many technologies for data sharing, some of which come into play for Data + Cloudbursting. When the solution requires extending the 'boundaries' of the enterprise in both the application and data domains (as we do with cloudbursting), the first question has usually been constructed as: Should the shared data reside inside or outside the firewall?

Elastic perimeter technologies: For cloudbursting with data 'leaving the building', elastic virtual private networks (such as CohesiveFT's VPN-Cubed, particularly their Data Center to EC2 version) address the underlying, network-oriented issues of wandering data.

Data masking & obfuscation: Conventional encryption of data at rest does not satisfy the safety requirements of most enterprises when data is placed outside the corporate data center. Because the data must be decrypted when "in use" by a cloud-resident application image, conventional disk or file encryption does not protect against compromise of, or misuse by, the systems processing the data. By means of data masking or obfuscation, suitably transformed portions (i.e., fields) can be used in the cloud while preserving the integrity of the source data required by the application.
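To illustrate what a 'suitably transformed field' might look like, here is a minimal tokenization sketch. This is my own assumption about the mechanics, not any vendor's product: the sensitive value never leaves the enterprise, while the cloud-side application sees a deterministic token that still supports equality tests and joins.

```python
import hashlib
import hmac
import secrets
from typing import Optional

class FieldTokenizer:
    """Keyed, deterministic field tokenization (illustrative sketch).
    The key and the token->value vault stay on-premises; only tokens
    ever travel to the cloud-resident application."""
    def __init__(self, key: Optional[bytes] = None):
        self._key = key or secrets.token_bytes(32)
        self._vault = {}  # token -> original value (enterprise-resident)

    def tokenize(self, value: str) -> str:
        # HMAC keeps tokens deterministic per key but unguessable without it.
        token = hmac.new(self._key, value.encode(), hashlib.sha256).hexdigest()[:16]
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

tok = FieldTokenizer()
record = {"name": "Alice Example", "card_pan": "4111111111111111"}
cloud_record = {**record, "card_pan": tok.tokenize(record["card_pan"])}
# cloud_record is safe to ship; the PAN is recoverable only on-premises.
assert tok.detokenize(cloud_record["card_pan"]) == record["card_pan"]
```

One design caveat: deterministic tokens leak equality between records, which is what makes joins work but may itself be unacceptable for some fields; there, random tokens or format-preserving encryption would be the better fit.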

Meta-data & data virtualization: We're now starting to see, usually in conjunction with specific SaaS offers, data 'proxy' servers and other means that allow the enterprise to retain specific data elements 'locally resident' within the data center rather than residing 'in the clear' within a data cloud. What we can expect to see within the next year are solutions that couple this type of offer with Master Data Management technologies, or enhanced data-in-motion services provided by cloud service providers at all levels -- IaaS, PaaS and SaaS. The most immediate utility of these offers will be for enterprises wishing to make real use of cloudbursting.


Joe Weinman has broadened the definition of the real costs of an enterprise data center and shown clearly how cloudbursting + pre-positioned data can contribute to addressing BC/DR costs. Like BC/DR, data governance must be considered by the enterprise data center in the context of interorganizational data sharing. Cloudbursting is just one form of data sharing, and it presents the innovative cloud service provider with an opportunity to offer generic solutions to data sharing governance for the enterprise.

Truth in advertising: In two of my recent entrepreneurial adventures (Univa and Safe Data Sharing), as well as two for whom I've acted as an advisor (Perspecsys and Replicus), the problems of data stewardship and anticipatory data transport (e.g., moving/replicating the dataset well in advance) all come into play.