Overview
I first became involved in commercial analytics in the eighties, initially through work on customer segmentation and data visualisation, principally in banking but also in energy, manufacturing and the chemical industry. Later, in conjunction with my activities at the Sperry European Centre for AI, that work centred on pricing and yield management applications developed using a combination of statistical techniques, expert system technology and data centre architectures, all tightly integrated within a 4GL development and delivery environment. This provided a comprehensive and seamless scenario-building, hypothesis-testing and reporting capability.
Today, commercial analytics is a considerable field of practice. It covers a wide range of analytics techniques and tools that businesses can experiment with and apply in the never-ending quest to become relevant, maximise advantage and stay competitive in increasingly complex commercial theatres and rapidly changing and disruptive circumstances. Not only that, it allows businesses to capitalise on emerging windows of opportunity, incongruence and market anomalies, ahead of the competition.
Simply stated, commercial analytics offers the possibility not only of monitoring and reporting on the operational environment and the competitive forces at work in the business, but also of rapidly developing and deploying beneficial, effective and coherent management strategies in a timely and appropriate manner, based on the best available data and the most fitting analytics tools and techniques.
A brief look under the hood
To excel at commercial analytics requires an eclectic mix of skills, knowledge and experience – as well as real will – that covers decision science, statistics, risk reporting and management, machine learning, data warehousing, business intelligence, and where relevant, big data. Skills that will help to properly leverage analytics tools such as:
- Customer Value Analytics
- Credit Risk Modelling
- Marketing Campaign Optimisation
- Customer Churn Modelling
- Pricing and Yield Management
- Customer Segmentation
- Data Visualisation
- Propensity Modelling
- Fraud Detection Analytics
I will take a look at each of these techniques and how they can be applied in a business setting.
Customer Value Analytics is essentially about predicting customer behaviour. This avenue of analytics opened up largely thanks to pioneering work in areas such as data warehousing, and the possibilities this created for collecting additional customer information: for example, through the issuance and use of customer loyalty cards, or, in telecommunications, through the ability to analyse all attempted call records, whether the calls were successful or not. In short, Customer Value Analytics leans on the classic combination of data warehousing, data mining and predictive analytics to produce models that allow commercial theories about customer behaviour, behaviour manipulation and subsequent outcomes to be tested. In many businesses, Customer Value Analytics is an aspect of Know Your Customer, and forms an integral part of Customer Relationship Management systems and the quest for a more complete 360-degree view of the customer.
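As a minimal, purely illustrative sketch of the predictive side of this, the following Python fragment aggregates hypothetical transaction records into recency/frequency/monetary features and fits a simple model to estimate next-period spend; all column names and figures are invented, not taken from any real system.

```python
# Illustrative sketch only: turn raw transactions into per-customer
# features and fit a simple model that predicts next-period spend.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical transaction extract, e.g. from a loyalty-card feed.
transactions = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "amount":      [20, 35, 15, 200, 180, 5, 8, 6, 7],
    "days_ago":    [10, 40, 70, 5, 90, 3, 12, 30, 55],
})

# Classic recency / frequency / monetary (RFM) style features.
features = transactions.groupby("customer_id").agg(
    recency=("days_ago", "min"),
    frequency=("amount", "count"),
    monetary=("amount", "sum"),
).reset_index()

# Hypothetical target: observed spend in the following quarter.
features["next_quarter_spend"] = [60, 350, 20]

model = LinearRegression().fit(
    features[["recency", "frequency", "monetary"]],
    features["next_quarter_spend"],
)
features["predicted_value"] = model.predict(
    features[["recency", "frequency", "monetary"]]
)
print(features[["customer_id", "predicted_value"]])
```

In practice the feature set, model choice and target definition would all be driven by the business question at hand; the point here is simply the shape of the data warehousing to prediction pipeline.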
Credit Risk Modelling is essentially about modelling all aspects of risk involving credit and works hand-in-hand with asset valuation models. In simple terms, credit risk modelling is a calculation of the likelihood of a loan-repayment default, but in broader terms credit risk modelling covers concentration and systemic credit risk. Lenders, of all types, use Credit Risk Modelling to classify customers and prospects based on risk, and then choose suitable credit strategies based on the calculated risk. Credit risk modelling forms part of credit scoring for products such as credit cards and overdrafts, where the model is used to set credit limits. Models can also be used to calculate the amount of collateral required in order to ensure that a loan is asset backed. Aspects of credit risk modelling include: counterparty credit risk; expected loss; loss given default; probability of default; quantitative credit analysis; value at risk; and, potential future exposure.
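To make the likelihood-of-default point concrete, here is a tiny worked example of the standard expected-loss decomposition (expected loss = probability of default × loss given default × exposure at default); the loan figures are invented for illustration.

```python
# Illustrative only: the expected-loss decomposition commonly used in
# credit risk (Expected Loss = PD x LGD x EAD). Figures are invented.
def expected_loss(pd_default: float, lgd: float, ead: float) -> float:
    """Probability of default x loss given default x exposure at default."""
    return pd_default * lgd * ead

# A hypothetical unsecured loan of 10,000 with a 2% one-year default
# probability and a 45% loss given default.
el = expected_loss(pd_default=0.02, lgd=0.45, ead=10_000)
print(f"Expected loss: {el:.2f}")   # 90.00
```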
Marketing Campaign Optimisation is about collecting and analysing the data needed to make informed business decisions about marketing campaign strategies and mixes, with a particular focus on maximising the lifetime value of a customer. Or, in an ideal world, it's about getting the right product, with the right message, in the right place, at the right time, at the right price, in front of the right person. Typically this type of analysis aims to surface improvements that can be easily addressed, while also providing the basis for long-term optimisation, that is, the improvement of a customer's overall lifetime value. Marketing campaign optimisation relies heavily on campaign identification methodology; tool configuration and customisation; and data integration. It is the product of a value chain that starts with data warehousing and passes through an integrated customer view; customer profiling; segmentation; modelling; scoring; net present value calculation; and campaign measurement, before moving through planning and then on to execution, including the selection of appropriate marketing channels.
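As a hedged sketch of the net present value step in that value chain, the following discounts a customer's projected annual margin over a planning horizon, allowing for attrition; the margin, retention and discount figures are entirely hypothetical.

```python
# Sketch of a lifetime-value / NPV calculation: discount a customer's
# projected annual margin over a planning horizon. Figures are invented.
def customer_lifetime_value(annual_margin: float,
                            retention_rate: float,
                            discount_rate: float,
                            years: int) -> float:
    """Sum of discounted expected margins, allowing for attrition."""
    return sum(
        annual_margin * (retention_rate ** t) / ((1 + discount_rate) ** t)
        for t in range(1, years + 1)
    )

clv = customer_lifetime_value(annual_margin=120.0,
                              retention_rate=0.85,
                              discount_rate=0.08,
                              years=5)
print(f"Estimated lifetime value: {clv:.2f}")
```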
Customer Churn Modelling addresses the interesting but difficult problem of customer attrition, which is increasingly another measure of the value of some businesses. Churn is about risk, cost and value: the risk of losing a customer; the cost of retaining or acquiring a customer; and the value of retaining a customer and the value of that customer. Acquiring a new customer is usually more costly than retaining an existing one (the acquisition cost), so it makes sense to seek to retain the customer base whilst attempting to attract new customers. Customers may churn voluntarily, in that they make a conscious decision to switch to a competitor, or they may churn involuntarily, for reasons of health, location or financial circumstances; involuntary churn is generally excluded from customer churn modelling. Customer churn modelling is essentially about predicting attrition in order to target and reduce it through intentional management. Again, the analysis required to understand customer churn needs access to data that allows for behavioural modelling and for modelling the effects of deliberate attempts to incentivise changes in the behaviour of customers who are at risk of churning. By modelling acquisition costs, propensity to churn and predicted value, we can get a grasp of the risks posed to the customer base, the value of those risks and the acceptable costs of mitigating those risks.
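A minimal churn-scoring sketch along these lines, assuming behavioural features have already been assembled from the warehouse, might look like the following; all feature names, values and outcomes are invented.

```python
# Minimal churn-scoring sketch: fit a simple classifier on historical
# outcomes, then combine churn risk with value to prioritise retention.
import pandas as pd
from sklearn.linear_model import LogisticRegression

data = pd.DataFrame({
    "months_tenure": [3, 24, 36, 2, 48, 12, 6, 60],
    "support_calls": [4, 0, 1, 5, 0, 2, 3, 1],
    "monthly_spend": [20, 55, 80, 15, 90, 40, 25, 100],
    "churned":       [1, 0, 0, 1, 0, 0, 1, 0],   # historical outcome
})

X = data[["months_tenure", "support_calls", "monthly_spend"]]
model = LogisticRegression().fit(X, data["churned"])

# Score the existing base: propensity to churn per customer.
data["churn_risk"] = model.predict_proba(X)[:, 1]

# Combine risk with value to decide where retention spend is justified.
data["value_at_risk"] = data["churn_risk"] * data["monthly_spend"] * 12
print(data.sort_values("value_at_risk", ascending=False))
```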
Tips: Successful churn management requires the iterative rebuilding of churn prediction models as customer sensitivities and behaviours change and as new factors and market forces are discovered. It requires the close collaboration of multiple disciplines involved in sales management, operations management, IT, legal and marketing. It also depends heavily on the availability of timely, adequate and appropriate data and the technology needed to interpret and draw applicable generalisations from that data.
Pricing and Yield Management is again about understanding, anticipating and influencing customer behaviour in order to maximise revenue or profit from a finite resource or asset, such as airline seats, hotel rooms, advertising inventory and certain forms of energy generation and distribution. As mentioned previously, I was involved in the design and development of an Airline Revenue Enhancement product at Unisys in the late eighties. The approach then was to use a combination of statistics and predictive analysis, together with expert knowledge captured and represented in business rules and processed through a backward, forward and procedural chaining inference engine. To implement effective pricing and yield management it is necessary to have adequate, appropriate and timely data on the fixed amount of resource available for sale; the window for selling the perishable product; and predictive data on what different customers are willing to pay for the same type and class of resource.
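As a toy illustration of the kind of calculation involved, the following applies Littlewood's rule to a two-fare-class seat inventory; this is a sketch only, not the method used in the product mentioned above, and the fares and demand parameters are invented.

```python
# Toy two-fare-class yield management sketch (Littlewood's rule).
# Full-fare demand is assumed normally distributed; figures are invented.
from scipy.stats import norm

full_fare, discount_fare = 400.0, 150.0
mean_full_demand, sd_full_demand = 60.0, 15.0

# Littlewood: protect seats for full-fare demand up to the point where
# P(full-fare demand > protection level) * full_fare = discount_fare.
critical_prob = discount_fare / full_fare
protection_level = norm.ppf(1 - critical_prob,
                            loc=mean_full_demand,
                            scale=sd_full_demand)

capacity = 180
booking_limit_discount = capacity - protection_level
print(f"Protect {protection_level:.0f} seats for full fare; "
      f"sell at most {booking_limit_discount:.0f} at the discount fare.")
```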
Tips: Issues and facets to be aware of in using pricing and yield management include geo-marketing; variable pricing; price discrimination; last-minute advertising; and revenue prediction, enhancement and management. Do not overlook the aspects of behavioural economics related to pricing, yield management and customer behaviour. Beware of allowing too much manual override of pricing and yield management models, as "people tend to price too high when they have high levels of inventory and too low when their inventory levels are low", whereas data-driven models generally have no such in-built predilections.
Customer Segmentation is about understanding the customer, and it is one of the classic modelling and analysis targets of data warehousing and CRM. Customer segmentation leans heavily on demographic analysis, clustering, geography, time-variance, psychographics and behaviour, and is aimed at maximising the effective use of marketing resources and at maximising cross-selling (the Amazon-style 'customers who bought this also bought that') and up-selling ('would you like to supersize that?' or 'try something different') opportunities. Customer segmentation may be coarse-grained or fine-grained; it may consist of multiple layers and multiple perspectives and the slicing, dicing and extrapolation of data across multiple dimensions and categories over time. Again, customer segmentation is about understanding, predicting, modifying and exploiting customer behaviour, typically for commercial reasons, but it is also a technique used by political parties in order to maximise their attractiveness to voters.
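As a minimal sketch of the clustering side of segmentation, the following groups a handful of hypothetical customers into three segments with k-means; the features used and the choice of three segments are purely illustrative.

```python
# Minimal segmentation sketch: k-means clustering on a few behavioural
# features. All feature values are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer features: annual spend, visits, basket size.
customers = np.array([
    [1200, 48, 25], [150, 6, 25], [3000, 12, 250],
    [900, 36, 25],  [200, 8, 25], [2800, 10, 280],
])

scaled = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(segments)   # one segment label per customer
```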
Data Visualisation in commercial analytics can be used to visually detect patterns, correlations and trends that may not be so obvious to automated data mining and pattern recognition software. There are literally hundreds of data visualisation products to choose from, some of which can be used in conjunction with other analytics tools and techniques. A wide range of visualisation approaches is applied to data from a full range of sources, including IoT, Big Data, Big Data analytics, statistics, analytics, data science, data warehousing, business intelligence, operational databases, commercial databases and data enrichment.
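By way of a small, assumed example, the following plots synthetic spend against tenure to eyeball a relationship before any formal modelling; the data and the segment split are made up.

```python
# Quick illustrative plot: scatter of spend against tenure, coloured by a
# crude segment flag, to eyeball patterns before formal modelling.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
tenure = rng.uniform(0, 60, 200)                       # months
spend = 20 + 1.5 * tenure + rng.normal(0, 10, 200)     # loosely correlated
segment = (spend > 60).astype(int)

plt.scatter(tenure, spend, c=segment, cmap="coolwarm", alpha=0.7)
plt.xlabel("Tenure (months)")
plt.ylabel("Monthly spend")
plt.title("Spend vs tenure, coloured by segment")
plt.show()
```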
Tips: Data visualisation technologies are attractive, easy to relate to and frequently very accessible. By the same measure, they can also lead to incorrect inferences and correlations, over-simplification and the neglect of important nuance and causation. The trick is to simplify as much as one can without losing any relevant content. So, use with care.
Propensity Modelling allows one to create hypotheses regarding the likelihood that a prospect or customer will buy a particular product or service. These are 'propensity to buy' models: essentially, "you liked that, so you may like this". This is achieved by applying the generalisations obtained from tendency predictions, from coarse-grained to fine-grained analysis, and in simple terms it has become a classic form of predictive analytics. Fundamentally, propensity modelling helps target the right customers, optimise resources and significantly increase the probability of a positive response to advertising campaigns, by removing the need for a generic blanket approach. This type of propensity modelling helps companies like Netflix to recommend TV shows and films based on an interpretation of our propensity drawn from our past behaviour, and on the analysis of similar behaviour exhibited by other customers. At this level it is similar to fuzzy analysis and fuzzy case matching, and it can also be used in successful cross-selling and up-selling. Without a sufficient supply of adequate, appropriate and timely information, however, the suggested options may not always be close to the mark; indeed, too much data can also skew predictive recommendations so that they fail to capture the nuances of individual and collective preference, behaviour and choice.
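As a loose sketch of the "you liked that, so you may like this" idea, the following computes item-to-item similarity over a tiny invented purchase matrix; real propensity models draw on far richer behavioural data and more sophisticated methods.

```python
# Item-based similarity sketch over an invented user-item matrix.
import numpy as np

# Rows = customers, columns = products; 1 = bought/watched.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
])

# Cosine similarity between product columns.
norms = np.linalg.norm(interactions, axis=0)
similarity = (interactions.T @ interactions) / np.outer(norms, norms)

# Score unseen products for customer 0 by similarity to what they bought.
customer = interactions[0]
scores = similarity @ customer
scores[customer == 1] = -np.inf           # don't re-recommend owned items
print("Recommend product:", int(np.argmax(scores)))
```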
Tips: It's worth noting that Propensity Modelling is closely related to Uplift Modelling and Churn Modelling. It has also been applied to political campaigning and segment-of-one personalised medicine. Advances in data warehousing mean that propensity modelling is far more sophisticated than the traditional pigeon-holing of customers into simplistic categories of persuadables, sure things, lost causes and sleeping dogs. Propensity Modelling relies on interpretations of the past in order to predict the future, frequently with a view to achieving distinct outcomes. This tendency to lean too heavily on predicting the future from the past needs to be tempered frequently with large doses of realism and an acute attention to what is actually occurring in the marketplace and amongst customers, prospects and other punters.
Fraud Detection Analytics has been around for a long time and has typically relied on traditional means of data analysis. The first lines of business to introduce and use fraud detection were banks, telecoms and insurance companies. Fraud management is a data-, process- and knowledge-intensive activity. Techniques for fraud detection analytics rest on three pillars: statistical analysis, artificial intelligence and data warehousing.
Statistical approaches to fraud detection include data pre-processing (with the help of data warehousing); calculation of statistical constraints and categories; models of probability distributions; profiling; time series analysis of time-dependent data; clustering, classification, categorisation and association; and matching.
AI in fraud detection analytics includes data mining; expert systems; pattern recognition (also part of data mining); additional machine learning techniques, including supervised and unsupervised learning; and artificial neural networks (for identifying, for example, relevant yet superficially counter-intuitive patterns that would otherwise not be detected). Other techniques used in fraud detection analytics include Bayesian inference (to predict the probability of an event based on conditions that might be related to that event); decision theory; sequence matching; and signal interpretation.
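To illustrate the Bayesian point above, here is a small worked example of the posterior probability that a flagged transaction is actually fraudulent; all of the rates are invented for illustration.

```python
# Worked Bayes-rule sketch: P(fraud | flagged), with invented rates.
def posterior_fraud(prior_fraud: float,
                    p_flag_given_fraud: float,
                    p_flag_given_legit: float) -> float:
    """Posterior probability of fraud given that a rule flagged the transaction."""
    p_flag = (p_flag_given_fraud * prior_fraud
              + p_flag_given_legit * (1 - prior_fraud))
    return p_flag_given_fraud * prior_fraud / p_flag

# 0.1% of transactions are fraudulent; the rule flags 90% of fraud but
# also 2% of legitimate traffic.
print(f"{posterior_fraud(0.001, 0.90, 0.02):.3f}")   # ~0.043
```

The small posterior despite a seemingly accurate rule is exactly the kind of base-rate effect that makes fraud detection a data- and knowledge-intensive exercise.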
Tips: It should also be noted that there is a high degree of crossover in the previously identified techniques, and that much of the contemporary literature on fraud detection analytics is often tainted with imprecisions, confusion and contradictions, so, beware of the terrain.
That’s it folks
As I stated at the outset, the playing fields of commercial analytics are immense and cover a number of interrelated disciplines, methods and techniques. Indeed, many of the themes mentioned briefly here are comprehensive, extensive and involved in their own right.
Care must also be exercised because some of these are fields in which competing theories, methods and approaches abound, so it pays to be cautious, sceptical and well advised.
Commercial analytics is essentially about the customer, the prospect, the markets, risk and reward. Yes, it can also be about cost reduction, faster and better decision making, and new products and services, but it is really about much more than that. In a sense, the heart of commercial analytics can also be found at the heart of a business’s reason for being.
Also remember that the best applications of commercial analytics are all based on business imperatives, drivers and initiatives and the quality formulation, design and support of commercially advantageous strategies, tactics and operational decisions. Or, simply stated, have a good reason for doing it, and you won’t go far wrong.