Big Data Is Not Enough


Big data is the big buzzword in the world of analytics today. According to Google Trends, shown in the figure, searches for “big data” have been growing exponentially since 2010, though the curve may be starting to level off. Or browse Amazon.com for books with “Big Data” in the title sometime: the publication dates, for the most part, are in 2012 or 2013.


But what is the key that unlocks the big data door? In his April 12 interview with Eric Siegel, Ned Smith of Business News Daily starts with this apt insight: “Predictive Analytics is the ‘Open Sesame’ for the world of Big Data.” Big data is what we have; predictive analytics (PA) is what we do with it.

Why is the data so big? Where does it come from? Those of us who do PA usually think of building predictive models on structured data pulled from a database, typically flattened into a single modeling table by a query so that the data can be loaded into a software tool. We then clean the data, create features, and away we go with predictive modeling.
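To make that flattening step concrete, here is a minimal sketch in pandas. The file names, column names, and keys are hypothetical; the point is simply that child tables get rolled up and joined so the modeling table ends up with one row per entity.

```python
# A minimal sketch of "flatten into one modeling table" (hypothetical files and columns).
import pandas as pd

customers = pd.read_csv("customers.csv")   # one row per customer
accounts = pd.read_csv("accounts.csv")     # one row per account, many per customer

# Roll the account table up to the customer level before joining,
# so the final table has exactly one row per customer.
account_summary = (
    accounts.groupby("customer_id")
    .agg(n_accounts=("account_id", "count"),
         total_balance=("balance", "sum"))
    .reset_index()
)

modeling_table = customers.merge(account_summary, on="customer_id", how="left")
modeling_table = modeling_table.fillna({"n_accounts": 0, "total_balance": 0.0})
```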

But according to a 2012 IBM study, “Analytics: The real-world use of big data,” 88% of big data comes from transactions, 73% from log data, and significant proportions come from audio and from still images and video. These are not structured data. Log files are often unstructured data containing nothing more than notes, sometimes freehand, sometimes machine-created, and therefore cannot be used without first preprocessing the data with text mining techniques. Those of us who have built models augmented with log files or other text data know how much work is involved in transforming text into useful attributes that can then be used in predictive models.
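As a rough illustration of that preprocessing, here is a minimal sketch that turns free-text log notes into numeric attributes. The log strings are invented, and TF-IDF is just one common text-mining choice, not necessarily what any given team would use.

```python
# A minimal sketch of converting free-text log notes into model attributes (hypothetical data).
from sklearn.feature_extraction.text import TfidfVectorizer

log_notes = [
    "customer called re: billing error, refund issued",
    "AUTO: retry payment failed, card expired",
    "chat transcript - upgrade inquiry, no action",
]

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english",
                             max_features=500)        # cap the vocabulary size
text_features = vectorizer.fit_transform(log_notes)   # sparse matrix, one row per note

# These columns can then be joined back to the structured data by record ID.
print(text_features.shape, len(vectorizer.get_feature_names_out()))
```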

Even the most structured of the big data sources, transactional data, often amounts to nothing more than dates, IDs, and very simple information about the nature of the transaction (an amount, a time period, and perhaps a label describing the transaction).

Transactional data is rarely used directly; it is usually transformed into a form more useful for predictive modeling. For example, rather than building models where each row is a web page transaction, we transform the data so that each row is a person (the ID) and the fields are aggregations of that person’s history for as long as their cookie has persisted; the individual transactions have to be linked together and aggregated to be useful.
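Here is a minimal sketch of that linking and aggregation, with invented person IDs, timestamps, and amounts. The aggregates shown (counts, totals, averages, tenure) are only examples of the kind of per-person history fields one might derive.

```python
# A minimal sketch of rolling transaction-level rows up to one row per person
# (hypothetical transactions table with person_id, timestamp, and amount).
import pandas as pd

transactions = pd.DataFrame({
    "person_id": ["a", "a", "b", "a", "b"],
    "timestamp": pd.to_datetime(
        ["2013-01-03", "2013-01-10", "2013-01-04", "2013-02-01", "2013-02-11"]),
    "amount": [25.0, 40.0, 10.0, 5.0, 60.0],
})

per_person = (
    transactions.groupby("person_id")
    .agg(n_transactions=("amount", "count"),
         total_spend=("amount", "sum"),
         avg_spend=("amount", "mean"),
         first_seen=("timestamp", "min"),
         last_seen=("timestamp", "max"))
    .reset_index()
)
# Derived history field: how long this person has been observed.
per_person["tenure_days"] = (per_person["last_seen"] - per_person["first_seen"]).dt.days
```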

The big data wave we are experiencing is therefore not directly helpful for improving predictive models; we first need to determine the unit of analysis for the models, i.e., what a record in the modeling data represents. The unit of analysis is determined by the question the model is intended to answer, or, put another way, by the decision the model is intended to improve within the organization. That, in turn, comes from defining the business objectives of the models, normally done by a program manager or other domain expert in the organization, not by the modeler.

The second step in building data for predictive modeling is creating the features to include as predictors in the models. How do we determine the features? I see three ways:

  1. The analyst can define the features based on his or her experience in the field, or do research to find what others have done through Google searches and academic articles. This assumes the analyst is, to some degree, a domain expert.
  2. The key features can be determined by other domain experts, either handed down to the analyst or gathered through interviews of domain experts by the analyst. This is better than a Google search because the answers reflect the organization’s own perspective on solving the problem.
  3. The analyst can rely on algorithm-based feature creation (a minimal sketch follows this list). In this approach, the analyst merely provides the raw input fields and allows the algorithms to find the appropriate transformations of individual fields (easy) or multivariate combinations (more complex). Some algorithms, and some software implementations of them, do this quite effectively. I see this third approach advocated, implicitly, by data scientists in particular.
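
As a simple illustration of the third approach, the sketch below lets an algorithm generate multivariate combinations from two raw numeric fields. PolynomialFeatures is only one basic example of automatic feature construction, and the field names are hypothetical.

```python
# A minimal sketch of algorithm-driven feature creation (hypothetical raw fields).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # two raw input fields

poly = PolynomialFeatures(degree=2, include_bias=False)
X_expanded = poly.fit_transform(X)   # adds squares and the cross-term automatically

print(poly.get_feature_names_out(["field_1", "field_2"]))
# e.g. ['field_1' 'field_2' 'field_1^2' 'field_1 field_2' 'field_2^2']
```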

In reality, a combination of all three is usually used, and I recommend all three. But features based on domain expertise almost always provide the largest gains in model performance compared with algorithm-based (automatic) feature creation.

This is the new three-legged stool of predictive modeling: big data provides the information, augmenting what we have used in the past; domain experts provide the structure for how to set up the data for modeling, including what a record represents and the key attributes expected to help solve the problem; and predictive analytics provides the muscle to open the door to what is hidden in the data. Those who take advantage of all three will be the winners in operationalizing analytics.

First posted at The Predictive Analytics Times

