AWS re:Invents Workflow and Hybrid Storage

While the news about AWS Redshift had the 'drama' and novelty, it reflects an attention to enterprise customer requirements that is also found in an important, but less heralded, service and a ground-breaking partnership.

Data Pipeline: This workflow service lets users create a variety of reasonably straightforward data-processing workflows; all of the major AWS services and their 'manageable' objects can now be included in a workflow.  While it won't be the tool of choice for the expert DBA, it will be appropriate for the work-group user who signs up for AWS rather than going through the corporate IT resource.  Over time, this will become more sophisticated.
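To make the idea concrete, here is a minimal sketch of what a Data Pipeline definition looks like: a flat list of objects (a schedule, data nodes, an activity) wired together by "ref" fields. The object types follow the documented format, but the bucket paths, IDs, and schedule here are invented for illustration, and the consistency check is my own helper, not part of the service.

```python
import json

# A hypothetical daily copy job: move raw logs to a cleaned location.
# Object types (Schedule, S3DataNode, CopyActivity) are real Data Pipeline
# concepts; every name and path below is made up.
pipeline = {
    "objects": [
        {"id": "Daily", "type": "Schedule", "period": "1 day",
         "startDateTime": "2012-12-01T00:00:00"},
        {"id": "RawLogs", "type": "S3DataNode", "schedule": {"ref": "Daily"},
         "filePath": "s3://example-bucket/raw/"},
        {"id": "CleanLogs", "type": "S3DataNode", "schedule": {"ref": "Daily"},
         "filePath": "s3://example-bucket/clean/"},
        {"id": "CopyStep", "type": "CopyActivity", "schedule": {"ref": "Daily"},
         "input": {"ref": "RawLogs"}, "output": {"ref": "CleanLogs"}},
    ]
}

def unresolved_refs(defn):
    """Return any {"ref": ...} values that don't name a defined object."""
    ids = {obj["id"] for obj in defn["objects"]}
    missing = []
    for obj in defn["objects"]:
        for value in obj.values():
            if isinstance(value, dict) and "ref" in value and value["ref"] not in ids:
                missing.append(value["ref"])
    return missing

print(json.dumps(pipeline, indent=2))
print(unresolved_refs(pipeline))  # an empty list means the workflow is wired up
```

The appeal for the work-group user is exactly this declarative shape: describe the pieces and how they connect, and let the service handle scheduling and retries.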

NetApp Private Storage for AWS: This 'joint infrastructure' offering allows customers to utilize both private and public cloud resources, and is one of the few services I have seen that builds on the AWS Direct Connect capabilities announced last year.  It starts to address a set of requirements that enterprise IT has called out related to safe and performant data storage (and data transport) from on-premises data center to managed data center to AWS.  It begins to take into account the data residency and data privacy issues about which enterprise IT has been most vocal.

That said, as important as the private storage service itself is the way in which AWS must now address the issues of contractual responsibility and liability in AWS data 'stewardship.' Similarly, AWS owes 'the enterprise' some clarity about the respective contractual responsibilities of AWS and its enterprise customers when using AWS' multi-tenant resources.  When these issues are addressed to the satisfaction of the enterprise IT hard-cases, the compliance auditors, and the PII regulators, the resulting explosion of cloud usage in hybrid environments by enterprise customers will dwarf the last two years' growth of AWS… and that's saying something.


Splunk IPO - The Joys of Data Exhaust

Since I first heard about Splunk (probably in early 2005), I have found the whole notion of using log files as a source of actionable information to be a perfect exemplar of the 'data exhaust' principle.  To see them do so well over the course of the past six years and top it off with this IPO makes me very happy for all involved ... kudos (particularly to Michael Baum, who led the charge in the company's infancy)!

I've spoken about this for years, but have never done justice to the idea in a really thoughtful post.  But, don't worry... I won't do that here.  At least, not yet.  

The fundamental idea is: Extract value from the operational data generated by the transactions that are (ostensibly) the primary business function.  If the situation is really an exemplar of the 'data exhaust' principle, the by-product becomes more valuable than the 'primary' business.  
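A minimal sketch of that idea, using made-up web-server access-log lines as the "exhaust" of a primary business function (serving requests). The log format and the pages are invented; the point is that a by-product nobody designed as a data product yields two actionable views almost for free.

```python
from collections import Counter

# Invented access-log lines: ip, method, path, status.
log_lines = [
    '203.0.113.5 GET /pricing 200',
    '203.0.113.9 GET /pricing 200',
    '203.0.113.4 GET /pricing 200',
    '203.0.113.5 GET /signup 500',
    '198.51.100.2 GET /signup 500',
    '203.0.113.7 GET /docs 200',
]

def summarize(lines):
    """Turn raw log exhaust into two actionable views:
    traffic by page, and server-error counts by page."""
    hits, errors = Counter(), Counter()
    for line in lines:
        _ip, _method, path, status = line.split()
        hits[path] += 1
        if status.startswith('5'):  # 5xx = server-side failure
            errors[path] += 1
    return hits, errors

hits, errors = summarize(log_lines)
print(hits.most_common(1))  # which page draws the most demand
print(errors)               # which page is silently failing
```

The transactions were logged for operational reasons; the demand signal and the failure signal fall out as by-products — which is the whole 'data exhaust' bet.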

I've enjoyed (and benefitted from) a few businesses that executed on this model.  Instill (acquired in 2008 by iTradeNetwork) was one of the first SaaS companies on the planet, offering order entry and order management to the biggest buyers of food -- the big restaurants, chains and institutional food operations (e.g. schools, hospitals).  They were so prevalent in the institutional food business that they became the trusted source of market data and a means of verifying whether pricing triggers built into national sales contracts had been reached.  (Hey... if you're Olive Garden, buying 43 gazillion metric tons of grated parmesan a year, you want to know when you've hit the 40 gazillion tier, and are saving a buck on every ton!)
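The pricing-trigger check in that parmesan example reduces to very simple arithmetic once someone (like Instill) holds the aggregated purchase data. A sketch with entirely hypothetical tier numbers:

```python
# Hypothetical contract tiers: (cumulative volume threshold in tons,
# rebate per ton unlocked at that threshold). Numbers are invented;
# the point is the trigger check that trusted aggregate data enables.
TIERS = [(40_000, 1.00), (20_000, 0.50), (0, 0.00)]

def rebate_per_ton(volume_to_date):
    """Return the per-ton rebate unlocked by cumulative purchase volume."""
    for threshold, rebate in TIERS:  # tiers sorted highest first
        if volume_to_date >= threshold:
            return rebate
    return 0.0

print(rebate_per_ton(43_000))  # past the top tier
print(rebate_per_ton(25_000))  # mid tier
```

Trivial math — but only the party who sees every order across the contract can run it credibly, which is why the by-product data became the trusted source.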

But, back to Splunk.  They've paved the way for an incredibly valuable set of companies which now use many of the same principles for different aspects of event analysis, capacity planning, configuration, accounting and security in both conventional data centers and infrastructure services (aka IaaS cloud). They've definitely delivered value to their customers and their investors.  So, ... good on you, Splunk.


Splunk IPO kills, lives up to expectations : GigaOm

The widely anticipated Splunk IPO has not disappointed, with shares up nearly 90 percent from their opening price at one point Thursday. This initial excitement is hardly surprising given the interest in big data in general, and Splunk in particular. The term 'big data' refers to massive amounts of structured, unstructured and semi-structured data that is generated not only by standard-issue computers but by sensors and all manner of devices and machinery, not to mention social networks. This is information that many companies want to winnow for useful insights. Splunk's technology searches, analyzes and visualizes that data. Customers include Bank of America, Comcast, and Zynga.



Why Uber's data fascinates a neuroscientist

This is a particularly good interview about data analysis and emergent behavior.  Obviously, as the CEO of a company engaged in the creation of transportation analytics, I find this of some pretty significant interest!

Why Uber's data fascinates a neuroscientist - O'Reilly Radar:

Bradley Voytek: A lot of people think that Uber is just a car service, and that we figure out where to pick people up and where to take them. But as a cognitive neuroscientist, of course I'm interested in human behavior. To me, the coolest thing about what we can learn when we take a deeper look at Uber's data is how people move around a city. We get a little glimpse at how people flow, what neighborhoods are connected, where people go to party on weekend nights. It's fascinating.



Don’t call it a game: How Draw Something hit 30 million downloads 

I enjoyed this quick interview with Dan Porter, founder and OMGPOP CEO. 

Draw Something, the No. 1 app right now on iOS and Android, is listed as a game and draws a lot of comparisons to the family game Pictionary. But the funny thing is that it’s not really a game at all.

H/t to GigaOm.

