Mike Gualtieri published a nice piece on business rules engine algorithms last July that I wanted to point out to my readers. Mike summarizes the mainstream rules engine algorithms into those that deliver inferencing at run time, those that execute sequentially and those that execute sequentially but have compile-time algorithms to sequence rules correctly.
While I have a few comments on Mike’s report, I was struck both by its measured tone and a great piece of advice:
Let Authoring Flexibility Drive Your Algorithm Decision
This is key. The extent to which the tool allows you to write rules the way you need to write them, the way your business users need to write them, is what matters. It is the flexibility and agility that business rules give you that is the primary value driver. Pick your vendor based on how the rule editing and management environment will work for you. The capabilities of the vendor's algorithm(s) will affect this, but they are just part of the puzzle; the kind of editing and management environment will matter more. Most of the major rule vendors will do a good job on performance if you use the tools the way they are intended and don't try to force-fit your previous programming experience too much.
If you are interested in this topic, buy the report (it’s a good one). I would just add a couple of things:
- I think he under-calls the potential for inferencing engines to run faster than sequential ones when a very large number of rules exist but each transaction only fires a tiny percentage of them (common in regulatory compliance, for instance)
- Some vendors allow different algorithms to be used in different steps in a decision, a useful feature
- I have never found a Rete user who had trouble recreating a bug. The data in a transaction determines the sequence of rule execution, and the same data/transaction will reliably drive the same sequence of execution. Sure, different data results in a different order of execution, but that has no impact on recreating a bug.
- I think the ability to integrate predictive analytics with business rules is already bringing new algorithms to bear. A decision tree built using a genetic algorithm might execute the same way any other decision tree does, but it shows the results of the new algorithm just the same.
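The reproducibility point about Rete can be illustrated with a toy data-driven rule engine. This is purely a sketch, not any vendor's Rete implementation, and the rule names and transaction fields are hypothetical; it just shows that when the data in the transaction drives which rules fire, the same data reliably produces the same firing sequence:

```python
# Toy forward-chaining rule engine (illustrative only, not a real Rete
# implementation): each rule is (name, condition, action). Rules fire when
# their condition matches the facts; firing can assert new facts that
# trigger further rules, so evaluation loops until nothing new fires.

def run_rules(facts, rules):
    """Fire every matching rule until quiescence; return the firing order."""
    fired = []
    changed = True
    while changed:
        changed = False
        for name, condition, action in rules:
            if name not in fired and condition(facts):
                fired.append(name)
                action(facts)   # may assert new facts, enabling other rules
                changed = True
    return fired

# Hypothetical compliance-style rules over a transaction's data.
rules = [
    ("flag_large", lambda f: f["amount"] > 10_000,
                   lambda f: f.update(flagged=True)),
    ("escalate",   lambda f: f.get("flagged") and f["country"] == "XX",
                   lambda f: f.update(escalated=True)),
    ("log_normal", lambda f: f["amount"] <= 10_000,
                   lambda f: f.update(logged=True)),
]

txn = {"amount": 25_000, "country": "XX"}
first = run_rules(dict(txn), rules)
second = run_rules(dict(txn), rules)
assert first == second          # same data, same sequence of execution
print(first)                    # ['flag_large', 'escalate']
```

Different transaction data (say, a small amount) would fire a different set of rules in a different order, but re-running the engine with the bug-triggering transaction always reproduces the same execution path.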