This post is contributed by Revolution Analytics CEO Norman Nie, and cross-posted from the Future of Open Source Forum.
A lot of attention has been focused recently on Big Data, and rightly so: Big Data is a Big Deal. (See this LinuxInsider article, Big Data, Big Open Source Tools, for a comprehensive overview of Big Data issues.) But what, exactly, is Big Data? Edd Dumbill of O’Reilly defines Big Data as “data that becomes large enough that it cannot be processed using conventional methods”. That’s a good definition: Big Data is primarily about your capability to make use of the data, not just its size in terabytes or petabytes.
In that same article, Dumbill coins a term to represent the emerging software infrastructure — typically open source, distributed, and running on commodity hardware — for handling Big Data: “A stack for big data systems has emerged, comprising layers of Storage, MapReduce and Query (SMAQ).” You might notice that one letter in the acronym SMAQ remains undefined. I think it’s obvious what the ‘A’ should stand for: Analytics.
Because while much attention has been paid to storing, organizing, and accessing Big Data, the real value of data lies in what you can do with it. The goal is simple: earn a return on what’s likely to have been a significant investment in collecting and storing information, by using data analysis to better understand your business and anticipate future outcomes. To achieve this goal, we need a layer for analyzing the data on top of the platform for storing and accessing it: a platform where skilled data scientists can bring the latest statistical and machine learning techniques to bear to create predictive models of unprecedented power. And on top of that we need another layer for the ultimate consumer of the intelligence: the business user looking at reports or interacting with a BI dashboard, for example. As with the SMAQ stack, these layers will typically be open source, distributed, and designed to exploit the power of multi-processor commodity hardware.
Together, these data, analysis and reporting layers form an emerging software stack for deriving value from data: an open-source Analytics Stack, if you will. Where SMAQ provides a platform for Big Data storage and access, the open-source analytics stack provides a platform for extracting intelligence and providing insights from those data.
In other words, the end result of the open-source analytics stack is high-value, easy-to-use business intelligence. From a purely business perspective, the true value of the stack is its ability to generate predictive analytics at comparatively low cost. Unlike monolithic BI platforms, which excel primarily at providing tabular summaries of historical data, the open-source analytics stack is forward-looking: predictive models provide estimates of future outcomes and (equally important) the uncertainty around those estimates. That makes it ideal for businesses facing intense competition and constantly changing market demands.
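To make that contrast concrete, here is a minimal sketch in R of what "forward-looking" means in practice: a simple trend model produces not just point forecasts but also prediction intervals that quantify the uncertainty around them. The data are simulated stand-ins, and the model is purely illustrative rather than a recommended approach.

```r
# Minimal illustration: point forecasts plus prediction intervals.
# The sales series below is simulated, standing in for data the
# storage layer would actually supply.
set.seed(42)
history <- data.frame(
  month = 1:24,
  sales = 100 + 5 * (1:24) + rnorm(24, sd = 10)
)

# Fit a simple trend model on the historical data
fit <- lm(sales ~ month, data = history)

# Forecast the next quarter with 95% prediction intervals:
# each row gives the estimate plus lower and upper bounds
future <- data.frame(month = 25:27)
predict(fit, newdata = future, interval = "prediction", level = 0.95)
```

The point is not the particular model, but the shape of the output: an estimate of what comes next, bracketed by a measure of how confident you can be in it, which is exactly what a tabular summary of last quarter cannot give you.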
But hold on a second: what about those traditional software vendors with their monolithic BI solutions? Don’t some of those heavyweight solutions aim to provide the same kind of usable business intelligence that we’re talking about?
To be fair, the answer is yes — sort of. But when you compare the cost and flexibility of open-source resources to the cost and flexibility of proprietary systems, there’s no contest.
Open source wins, hands down.
Let’s take a moment to look at the layers of the open-source analytics stack. The bottom layer is the operating system, typically a Linux variant, running on inexpensive yet powerful multiprocessor hardware in a local cluster or in the cloud. Above that is a data layer, Hadoop, MySQL or some similar tool, which enables you to reliably and efficiently store your data in volume. Next is an ETL layer, for processing the data and extracting segments for analysis. On top of that is the analytics layer, for which R, the open-source programming language that has become the lingua franca of statistical analysis, is the ideal candidate. And on top of the analytics layer is the presentation or BI layer, through which the results of these predictive models are conveyed.
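To give a feel for how these layers fit together, here is a hedged sketch in R: the analytics layer pulls a prepared segment from the data layer, fits a predictive model, and hands scored results off to the presentation layer. The database name, credentials, table, and column names are hypothetical, not a reference to any particular product’s schema.

```r
# A sketch of the analytics layer talking to the data and BI layers.
# All connection details, table names, and column names are hypothetical.
library(DBI)
library(RMySQL)

# Data layer: pull a prepared segment out of MySQL
# (the ETL layer would have populated this table)
con <- dbConnect(MySQL(), dbname = "warehouse", host = "db.example.com",
                 user = "analyst", password = "secret")
customers <- dbGetQuery(con, "SELECT tenure, spend, churned FROM customer_segment")
dbDisconnect(con)

# Analytics layer: fit a predictive model in R
# (churned is assumed to be a 0/1 outcome)
model <- glm(churned ~ tenure + spend, data = customers, family = binomial)

# Presentation/BI layer hand-off: score the customers and write the
# results somewhere a dashboard or report could pick them up
customers$churn_probability <- predict(model, type = "response")
write.csv(customers, "churn_scores.csv", row.names = FALSE)
```

In a production stack the hand-off to the BI layer would more likely be a database table or a report feed than a flat file, but the division of labor between the layers is the same.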
With all of these layers, interoperability is really critical. As the value of the open source analytics stack becomes more widely acknowledged, more players will emerge to provide tools that work and play well with the other tools in the stack. Companies such as Cloudera, Talend, Revolution Analytics, and Jaspersoft already have products available that work together to make the open source analytics stack a fully-functioning reality. It’s entirely reasonable to assume that at some point in the near future, dozens of companies will offer products and services designed expressly to work within this stack or to support value-added operations around it.
These are indeed exciting times, and it will be fun to watch this open-source analytics stack evolve. Given the size, passion and commitment of the open-source community, I predict that its evolution will be quick and dramatic.
I also can’t help wondering what will happen to the traditional vendors and their monolithic BI solutions. Frankly, I think they’re on the wrong evolutionary track.
For the business user, the key takeaway is that this data analytics stack, built on commodity hardware and leading-edge open-source software, is a lower-cost, higher-value alternative to the status quo solutions offered by traditional vendors. Just a couple of years ago, these kinds of robust analytic capabilities were only available through major vendors. Today, the open-source community provides everything that the traditional vendors provide — and more. With open source, you have choice, support, lower costs and faster cycles of innovation. The open-source analytics stack is more than a handy collection of interoperable tools — it’s an intelligence platform.
In that sense, the open-source analytics stack is genuinely revolutionary.