— Posted by Carole-Ann
In the past few weeks I have been briefing our internal teams, external customers and industry analysts on our Decision Improvement capabilities. One of the topics that was most foreign to the BRMS addicts was Data Management, even though it is a critical piece of Decision Improvement.
Let me set the stage here. Decision Management obviously covers Decision Automation, with a particular emphasis on Business Rules, Predictive Analytics and Transactional Optimization. The idea is to combine a set of decisioning technologies to make decisions in your Production systems. Decision Improvement is the discipline of analyzing the decisions that are made day in and day out, and of exploring whether and how those decisions could become more profitable, efficient, safe, etc.
Very soon you will hear a lot more along those lines, with Decision Simulation capabilities that will complement other techniques such as Decision Modeling and Decision Optimization, which have been applied very successfully in Financial Services (sub-prime crisis notwithstanding).
Although we have been worshiping data in our predictive analytics and decision optimization activities, it was taboo most of the time in the BRMS world. Why is that, you may ask? Very simply because Business Rules technologies have always been careful to integrate with data rather than own it. Data always belonged to the application. Having proprietary data models or data storage was considered bad. We all focused on making it easy to access database records, XML documents, Java objects, .NET assemblies, COBOL copybooks, etc. Since we do not own the representation, it would be counter-intuitive to own the storage, wouldn't it?
Well, things are changing now, with the industry's need to improve (objectively) those decision services. With data, a brand new world of opportunities opens up. And with that potential come new architectural challenges, of course.
When you simulate a decision service, the KPIs that you derive from each run are only as good as the data that you use. This is the general Garbage In / Garbage Out principle at work. As business users become more savvy with these kinds of techniques and technologies, I anticipate that data warehousing technologies will gain steam. Although this is not at all new, actually pretty classic, I expect Decision Management ecosystems to include datamart technology more often than not.
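To make the simulation idea concrete, here is a minimal sketch of replaying records through a decision service and deriving KPIs from the run. Everything in it (the decide() function, the score cutoff, the field names) is illustrative, not a depiction of any particular product:

```python
# A minimal sketch of a decision simulation run: replay historical
# records through a candidate decision function and compute KPIs.
# All names and thresholds here are hypothetical.

def decide(record):
    # Stand-in for a real decision service: approve applicants
    # above an assumed score cutoff.
    return "approve" if record["score"] >= 620 else "decline"

def simulate(records):
    decisions = [(r, decide(r)) for r in records]
    approved = [r for r, d in decisions if d == "approve"]
    return {
        "approval_rate": len(approved) / len(decisions),
        "expected_loss": sum(r["loss_given_default"] * r["default_prob"]
                             for r in approved),
    }

records = [
    {"score": 700, "default_prob": 0.02, "loss_given_default": 500.0},
    {"score": 580, "default_prob": 0.15, "loss_given_default": 800.0},
    {"score": 650, "default_prob": 0.05, "loss_given_default": 300.0},
]
print(simulate(records))
```

Notice that the KPIs are entirely determined by the records you feed in, which is exactly why the quality of the collected data matters so much.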
This goes beyond the traditional database component in a traditional software architecture, as historical data will need to be collected religiously, scrubbed and eventually massaged for future usage in Decision Simulation applications. This will put more stress on ETL capabilities, and to some extent continuous intelligence or real-time OLAP may become a medium for filtering out the relevant data to be collected (something I may discuss at greater length in another post). Data preparation is a discipline that we do not often find in BRMS projects today. The art of slicing and dicing data will allow better targeting of various segmented customer populations, but will also empower business users in the sense that they may get greater coverage with less data via sampling and weighting.
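The sampling-and-weighting point deserves a quick illustration. Below is a hedged sketch of drawing a small sample per customer segment and weighting each sampled record by its segment's share of the portfolio, so that KPI estimates still reflect the full population. The segment names and sizes are made up:

```python
import random

# Sketch: stratified sampling with weighting. Each sampled row carries
# a weight equal to the number of population rows it stands for.

population = (
    [{"segment": "prime", "loss": random.uniform(0, 100)} for _ in range(9000)]
    + [{"segment": "subprime", "loss": random.uniform(0, 400)} for _ in range(1000)]
)

def stratified_sample(records, per_segment=100):
    by_segment = {}
    for r in records:
        by_segment.setdefault(r["segment"], []).append(r)
    sample = []
    for rows in by_segment.values():
        picked = random.sample(rows, min(per_segment, len(rows)))
        weight = len(rows) / len(picked)  # rows represented per sampled row
        for r in picked:
            sample.append({**r, "weight": weight})
    return sample

sample = stratified_sample(population)
# Weighted sum estimates the total loss of the full 10,000-row population
# from only 200 sampled rows.
estimated_total_loss = sum(r["loss"] * r["weight"] for r in sample)
print(round(estimated_total_loss, 2))
```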
Historical data is king, of course, whether proprietary or pooled. That being said, you can benefit from Decision Simulation even if you do not have physical data. If you know enough about your business and the expected customer data distribution, you can fabricate data and make it available for simulation. This allows simulation capabilities to be deployed on day 1 and then enhanced with historical data as it becomes available.
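For those starting without historical data, fabrication can be as simple as sampling from the distributions you believe describe your customers. In the sketch below, every distribution parameter is an assumption you would replace with your own business knowledge:

```python
import random

# Sketch: fabricate applicant records from assumed distributions so that
# simulation can run on day 1, before any historical data exists.

def fabricate_applicant():
    return {
        "age": max(18, int(random.gauss(42, 12))),            # assumed mean/stddev
        "income": max(0.0, random.lognormvariate(10.8, 0.5)),  # assumed log-normal
        "score": min(850, max(300, int(random.gauss(680, 75)))),
    }

synthetic_portfolio = [fabricate_applicant() for _ in range(10_000)]
# Feed synthetic_portfolio to the simulation now; swap in real
# historical data as it accumulates.
```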
Not started yet?
If you are not yet thinking about Decision Improvement, but you have already started deploying Decision Automation projects, I highly recommend that you start thinking about data collection. Science or art, it takes practice to identify the appropriate data elements, calculations and decisioning artifacts that will be useful in the future. So get started!
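What might that decision-time data collection look like? Here is one rough sketch: persist the inputs, the intermediate calculations and the decisioning artifacts for every transaction, so that future simulations have something to replay. The field names and the rule-set version tag are hypothetical:

```python
import datetime
import json

# Sketch: append one JSON line per decision, capturing everything a
# future Decision Simulation run would need to replay it.

def log_decision(record, decision, calculations, log_file="decisions.jsonl"):
    entry = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "inputs": record,
        "calculations": calculations,   # e.g. derived scores, ratios
        "decision": decision,
        "rule_set_version": "1.4.2",    # which rules/strategy produced it
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record one decision as it happens in Production.
log_decision({"score": 700, "income": 52000}, "approve", {"debt_ratio": 0.31})
```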
More food for thought…
Offline Decision Improvement is only half of the story. Financial institutions have used Champion / Challenger strategies in Production for years. In that case, the data we apply strategies to is real-time Production data, not previously processed Production data, a.k.a. historical data.
If you are not familiar with the Champion / Challenger methodology, here is a simple definition: the idea is to route a percentage of transactions to the Challenger and the rest to the Champion, in a statistically relevant manner. This lets you try out a new Challenger strategy without shifting the entire portfolio in a traditional big-bang approach, thereby reducing risk. Portfolio performance can then be charted and analyzed to decide which strategy performs better against the business objectives.
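Here is a hedged sketch of what that routing could look like in code, with placeholder cutoff rules standing in for real Champion and Challenger strategies:

```python
import random

# Sketch: route a fixed share of transactions to the Challenger and the
# rest to the Champion, tagging each decision so portfolio performance
# can be compared later. The strategies and cutoffs are placeholders.

CHALLENGER_SHARE = 0.10  # e.g. 10% of traffic goes to the Challenger

def champion(txn):
    return "approve" if txn["score"] >= 620 else "decline"

def challenger(txn):
    return "approve" if txn["score"] >= 600 else "decline"  # looser cutoff

def route(txn):
    if random.random() < CHALLENGER_SHARE:
        return {"strategy": "challenger", "decision": challenger(txn)}
    return {"strategy": "champion", "decision": champion(txn)}

results = [route({"score": random.randint(500, 800)}) for _ in range(1000)]
share = sum(r["strategy"] == "challenger" for r in results) / len(results)
print(f"challenger share: {share:.1%}")  # close to 10%, by construction
```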