It’s hard to pick up a newspaper these days without reading about companies cutting costs. Part of this story is that companies are shifting their spending to invest in a new flavor of business intelligence technology that predicts the buying behavior of each customer or prospect: predictive analytics.
Predictively modeling customer response provides something completely different from standard business reporting and sales forecasting: actionable predictions for each customer. These per-customer predictions are key to allocating marketing and sales resources. By predicting which customer will respond to which offer, you can target each customer more effectively.
As your company prepares to deploy a predictive model, there are best practices that avert the risk that the model won’t perform up to par. Here are three guidelines for minimizing that risk.
1. Don’t evaluate the predictive model over the same data you used to create it.
When evaluating a predictive model, never test it over the same data that you used to produce it, known as the training data. The data used for evaluation must be held-aside data, called test data, which provides an unbiased, realistic view of how good the model truly is. If the model doesn’t perform well on the test data, revisit model generation: change the data or change the modeling method until you get a better model.
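As a minimal illustration, here is a sketch of a held-aside evaluation using scikit-learn. The file name and column names are hypothetical placeholders, and it assumes the predictor columns are numeric.

```python
# A minimal sketch of evaluating on held-aside test data, using scikit-learn.
# The file name and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

data = pd.read_csv("customer_history.csv")   # hypothetical response dataset
X = data.drop(columns=["responded"])         # predictor columns (assumed numeric)
y = data["responded"]                        # 1 = customer responded to the offer

# Hold aside 30% of the rows as test data; the model never sees them in training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate only on the held-aside test data for an unbiased view of the model.
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {test_auc:.3f}")
```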
2. Only deploy your predictive model incrementally.
Once you have a predictive model that looks good and is ready for deployment, start by deploying it in a “small dose”. Keep the existing method of decision-making in place and, for a small share of decisions (perhaps 5% of the time), employ the predictive model instead. This way, the old and the new stand in contrast, so you can see whether the value of the model is indeed proven, that is, whether profits or response rates have increased.
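One simple way to implement this “small dose” is to randomize which method handles each decision and tag the outcome, so the two paths can be compared later. The sketch below assumes hypothetical model and legacy-rule interfaces.

```python
# A minimal sketch of incremental deployment: route a configurable fraction of
# decisions (here 5%) to the new predictive model, and the rest to the existing
# decision rule. The function and method names are hypothetical.
import random

MODEL_TRAFFIC_SHARE = 0.05  # start small; raise the share as the model proves itself

def choose_offer(customer, model, legacy_rule):
    """Return (offer, arm) so each decision is tagged for later comparison."""
    if random.random() < MODEL_TRAFFIC_SHARE:
        # New path: the offer the model scores highest for this customer.
        return model.predict_best_offer(customer), "model"
    # Old path: the existing method of decision-making stays in place.
    return legacy_rule(customer), "legacy"
```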
3. Always maintain and test against a control set.
Finally, in a similar vein to (2) above, keep this kind of A/B testing in place going forward, pitting “use the model” against “don’t use the model”. Ideally, you keep that going indefinitely, so that you always have a small control set for which things continue the old way or, in any case, for which decisions are automated in a way that does not require a predictive model. This serves as a baseline against which the performance of the predictive model is constantly monitored. That way, you are alerted when the model’s performance is degrading, at which point it’s time to produce an updated model over more up-to-date data.
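To make the monitoring concrete, here is a sketch that compares the response rate of the model-driven group against the control group and raises an alert when the model’s lift erodes. The lift threshold and function names are illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of control-set monitoring: compare the response rate of the
# model-driven group against the control group and flag degradation.
# The threshold and function names are hypothetical.

def response_rate(outcomes):
    """outcomes: list of 0/1 responses for one group."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def check_model_health(model_outcomes, control_outcomes, min_lift=1.1):
    """Alert when the model's lift over the control set falls below min_lift."""
    model_rate = response_rate(model_outcomes)
    control_rate = response_rate(control_outcomes)
    lift = model_rate / control_rate if control_rate else float("inf")
    if lift < min_lift:
        # Time to retrain: produce an updated model over more up-to-date data.
        print(f"ALERT: lift fell to {lift:.2f} "
              f"(model {model_rate:.3f} vs control {control_rate:.3f})")
    return lift
```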
In sum, by following these best practices, your company can benefit from the accurate targeting of predictive analytics while minimizing risk.
For further predictive analytics reading, case studies, training options and other resources, see the Predictive Analytics Guide.