I recently heard about the KDD challenge this year. It's a telco-based challenge to build churn, cross-sell, and up-sell propensity models using the supplied train and test data.
For more info, see:
http://www.kddcup-orange.com/index.php
I am not able to download the data at work (security / download limits), so I might have to try this at home. I haven't even seen the data yet. I'm hoping it's transactional CDRs and not in some summarised form (which it sounds like it is).
I don't have a lot of free time, so I might not get around to submitting an entry, but if I do, these are some of the data preparation steps and issues I'd consider:
– handle outliers
If the data is real-world then you can guarantee that some values will be at least a thousand times bigger than anything else. A log transform might not be enough, so try a trimmed mean or frequency binning as a method to tame outliers.
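As a rough illustration of the frequency-binning idea (my own sketch, not anything from the challenge data — the function name and sample values are made up):

```python
# Equal-frequency binning: replace each value with the index of its
# quantile bin. A value a thousand times bigger than the rest simply
# lands in the top bin alongside other merely-large values, so it
# can no longer dominate a model.

def frequency_bin(values, n_bins=10):
    """Map each value to a bin index 0..n_bins-1 by rank (equal-frequency)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(order):
        bins[i] = rank * n_bins // len(values)
    return bins

# The extreme outlier (10000) ends up in the same top bin as a 9.
usage = [3, 5, 2, 8, 10000, 7, 4, 6, 9, 1]
print(frequency_bin(usage, n_bins=5))
```

A trimmed mean (drop the top and bottom few percent before averaging) is the other option mentioned above; binning has the advantage of keeping a usable feature value for every customer rather than discarding rows.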
– missing values
The KDD guide suggests that missing or undetermined values were converted to zero. Consider changing this. Many algorithms treat zero very differently from a null, so you might get better results by treating these zeros as nulls.
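A minimal sketch of the zero-to-null conversion (the column names and `zeros_to_null` helper are hypothetical — which columns deserve this treatment is a per-column judgement call, since zero calls may be genuine while a zero tenure rarely is):

```python
# Convert "fake" zeros back into nulls, but only in columns where a
# zero is implausible as a real measurement.

def zeros_to_null(rows, suspect_columns):
    """Return copies of rows with 0 replaced by None in the named columns."""
    cleaned = []
    for row in rows:
        new_row = dict(row)
        for col in suspect_columns:
            if new_row.get(col) == 0:
                new_row[col] = None
        cleaned.append(new_row)
    return cleaned

customers = [{"id": 1, "tenure_months": 0, "calls": 0},
             {"id": 2, "tenure_months": 24, "calls": 310}]
# tenure_months becomes None; calls stays 0 because zero calls is plausible.
print(zeros_to_null(customers, ["tenure_months"]))
```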
– percentage comparisons
If a customer can make a voice or SMS call, what's the percentage split between them? (e.g. 30% voice vs 70% SMS). If only voice calls, then consider splitting by time of day or peak vs off-peak as percentages. Using percentages helps remove differences of scale between high- and low-quantity customers. If the telephony usage covers a number of days or weeks, then consider a similar metric that shows increased or decreased usage over time.
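The voice/SMS split above can be sketched like this (an illustrative helper of my own, not part of the challenge data):

```python
# Express a customer's channel usage as percentages of their own total,
# so heavy and light users with the same behaviour get the same feature.

def channel_mix(voice_calls, sms_calls):
    """Return (voice %, sms %) of total events; (0, 0) if no activity."""
    total = voice_calls + sms_calls
    if total == 0:
        return (0.0, 0.0)
    return (100.0 * voice_calls / total, 100.0 * sms_calls / total)

# A heavy and a light user with the same mix produce identical features.
print(channel_mix(300, 700))  # -> (30.0, 70.0)
print(channel_mix(3, 7))      # -> (30.0, 70.0)
```

The same shape of feature works for peak vs off-peak splits, or for comparing one week's usage against the previous week's to capture the trend mentioned above.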
– social networking analysis
If the data is raw transactional CDRs (call detail records) then give a lot of consideration to performing a basic social network analysis. Even if all you can manage is to identify a circle of friends for each customer, this may have a big impact on identifying high-churn individuals or up-sell opportunities.
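Even the "circle of friends" baseline is cheap to compute from caller/callee pairs. A minimal sketch, assuming CDRs reduced to (caller, callee) tuples and an arbitrary threshold of repeat contact (both assumptions mine):

```python
from collections import defaultdict

def circle_of_friends(cdrs, min_calls=2):
    """For each caller, the set of numbers contacted at least min_calls times."""
    pair_counts = defaultdict(int)
    for caller, callee in cdrs:
        pair_counts[(caller, callee)] += 1
    friends = defaultdict(set)
    for (caller, callee), n in pair_counts.items():
        if n >= min_calls:          # ignore one-off calls
            friends[caller].add(callee)
    return dict(friends)

cdrs = [("A", "B"), ("A", "B"), ("A", "C"), ("B", "A"), ("B", "A")]
print(circle_of_friends(cdrs))
```

Once you have these sets, a churn flag on one member of a circle becomes a feature on the others, which is exactly why this can matter for churn prediction.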
– not all churn is equal
Rank customers by usage and scale the rank to a score from zero (low) to 1.0 (high). No telco should still be treating every churn as an equal loss. It's not! The loss of a highly valuable customer (high rank) is worse than that of a low-spend customer (low rank). Develop a model to handle this and argue your reasons for why treating all churn the same is a fool's folly. This is difficult if you have no spend information or history of usage over multiple billing cycles.
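The rank-and-scale step is straightforward; a sketch with made-up spend figures (how you then weight churn by this score is the modelling argument above, not shown here):

```python
# Rank customers by spend and scale the rank into [0.0, 1.0], so the
# score can weight churn: losing a 1.0 customer hurts more than a 0.0.

def value_rank(spend_by_customer):
    """Map each customer to a 0.0 (lowest spend) .. 1.0 (highest) score."""
    ordered = sorted(spend_by_customer, key=spend_by_customer.get)
    n = len(ordered)
    if n == 1:
        return {ordered[0]: 1.0}
    return {cust: rank / (n - 1) for rank, cust in enumerate(ordered)}

spend = {"A": 10, "B": 250, "C": 90, "D": 40}
print(value_rank(spend))  # B (top spender) scores 1.0, A scores 0.0
```

Rank-based scaling is deliberately insensitive to the outlier problem from the first bullet: a customer spending 1000x the average still only scores 1.0.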
Hope this helps
Good luck everyone!