One model or multiple models?
Several articles and blog posts have been written on Predictive Analytics and its role in improving business processes, reducing operational costs, and increasing revenue, among other things (see, for example, Eric Siegel’s article Predictive Analytics with Data Mining: How It Works). In spite of its widespread use and popularity, we often hear, “Past performance is no guarantee of future results…” A question therefore arises naturally: how can we improve predictive accuracy and hence make predictions more reliable? This post discusses one possible solution.
To understand the logic behind the solution, consider the following scenario: suppose your Business Intelligence software has developed a suitable regression model to forecast Sales Volume for the next quarter, scrupulously following all the steps of the model-building process. Further, suppose the model has been validated using one of the standard model validation procedures, such as cross-validation or the bootstrap. Your model (the “Expert”) is now ready for deployment.
Let us compare the above strategy with a real-life scenario: when a Board of Directors has to take a critical decision, several experts are consulted instead of just one. If that is the case, then when a critical revenue-generation or cost-cutting plan has to be launched, why should we not base our decision on multiple models instead of concentrating on just one, as planned above? That is precisely what we are going to do here.
Bootstrap Aggregating (Bagging)
This technique was initially proposed by Breiman (1996) to improve the predictive reliability of Decision Trees. Bagging and Boosting are the two strategies commonly used to increase predictive accuracy; in this post we will discuss the Bagging technique.
Traditionally, a predictive model – say, a regression model or a Decision Tree – is developed using a given training set D. In the Bagging method, k new training sets Di (i = 1, 2, …, k, where k is some suitable number), each of the same size as D, are generated by drawing random samples from D with replacement; this is called Bootstrap sampling. A predictive model is then developed on each bootstrap sample, giving you an ensemble of k models M1, M2, …, Mk, as sketched in the code below.
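Here is a minimal sketch of this procedure in Python, assuming NumPy and scikit-learn are available; the advertising/sales data and the choice of k = 25 are purely illustrative.

```python
# Minimal Bagging sketch: build k models, each on a bootstrap sample of D.
# The data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Hypothetical training set D: advertising spend (X) vs. sales volume (y).
X = rng.uniform(10, 100, size=(50, 1))
y = 3.0 * X[:, 0] + rng.normal(0, 25, size=50)

k = 25          # number of bootstrap samples, i.e. models in the ensemble
n = len(X)
models = []
for _ in range(k):
    # Bootstrap sample Di: draw n row indices from D with replacement.
    idx = rng.integers(0, n, size=n)
    models.append(LinearRegression().fit(X[idx], y[idx]))
```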
If your goal is to predict values – say, Sales Volume for the next quarter – then the Bagging rule is to use each model Mi to predict future sales and take the average of the k predicted values as the final prediction. If your goal is to build a classifier – say, to identify a churner or a loan defaulter – then classify each customer with each of the k models and base the final decision on a ‘majority vote’.
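Continuing the sketch above, the aggregation step for regression is a simple average of the k individual predictions (the query point X_new is hypothetical):

```python
# Bagging rule for regression: average the k individual predictions.
X_new = np.array([[60.0]])  # hypothetical advertising spend for next quarter

predictions = np.array([m.predict(X_new)[0] for m in models])
bagged_forecast = predictions.mean()
print(f"Bagged sales forecast: {bagged_forecast:.1f}")

# For a classifier, the analogue is a majority vote over the k predicted
# labels, e.g. collections.Counter(labels).most_common(1)[0][0].
```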
The bagged prediction or classifier is often more accurate than a single model or classifier built on the full data set D, because the aggregation process reduces the instability, or variance, present in any single model. The following case illustrates the advantage of Bagging:
Sales Forecast
Let us fit a regression model to predict Sales Volume based on the amount spent on Advertising and then apply the Bagging tool to obtain the predictions. The accompanying bar chart displays the Sales forecast before Bagging (blue bars) and after Bagging (green bars), along with their confidence intervals. As you can see, the Sales forecasts after Bagging are far more reliable.
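For readers who want to reproduce this kind of comparison, here is a compact version using scikit-learn’s built-in BaggingRegressor. The data is again synthetic, and a Decision Tree is used as the base learner, since that is the kind of unstable model Bagging was originally proposed for:

```python
# Compare a single model's forecast with a bagged ensemble's forecast.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(10, 100, size=(50, 1))          # advertising spend
y = 3.0 * X[:, 0] + rng.normal(0, 25, size=50)  # sales volume

single = DecisionTreeRegressor().fit(X, y)
bagged = BaggingRegressor(DecisionTreeRegressor(), n_estimators=25,
                          random_state=0).fit(X, y)

X_new = np.array([[60.0]])
print("Single-tree forecast:", single.predict(X_new)[0])
print("Bagged forecast:     ", bagged.predict(X_new)[0])
```

Rerunning this with different random seeds shows the single tree's forecast jumping around far more than the bagged one – the variance reduction described above.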
In my next post, I will discuss the Boosting technique for improving predictive accuracy.