For years, I have been trying to get to the bottom of what kind of self-learning predictive models and fraud scoring systems the vendors I cover actually provide. I often got the impression that, in many cases, it was a bit of a Wizard of Oz scenario: some guys sitting behind a big door or curtain, mining the data and writing rules for each customer based on the fraud that customer had experienced and confirmed.
This was the real reason 'tuning' was needed and the systems did not work well out of the box: the guys or gals hadn't yet 'tweaked the models for the organization,' meaning they hadn't yet mined the company's data and written the rules accordingly. This becomes especially problematic when the company doesn't have any confirmed fraud. The irony in these situations is that you can't prevent fraud until you have experienced enough of it!
It was also the main reason the models 'degrade over time.' The rules stop working once the bad guys catch on to them, so the cycle of data mining and rule creation by the guys and gals behind the curtain must start all over again, sometimes costing customers tens of thousands of dollars, if not more.
I continue to learn that this is pretty much the way many of these ‘predictive models’ work. Most of them are essentially just rules.
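To make that concrete, here is a minimal sketch in Python of what a rules-as-'model' scorer amounts to under the hood. The field names, point values, and thresholds are all invented for illustration, not taken from any vendor's product:

```python
# Hypothetical sketch of a rules-based "model." Every field name and
# threshold here is invented for illustration.

def fraud_score(txn: dict) -> int:
    """Return a 'risk score' by summing points from hand-written rules."""
    score = 0
    if txn.get("amount", 0) > 5_000:                        # rule 1: large amount
        score += 40
    if txn.get("ship_country") != txn.get("bill_country"):  # rule 2: address mismatch
        score += 30
    if txn.get("account_age_days", 9_999) < 7:              # rule 3: brand-new account
        score += 30
    return score  # e.g., anything >= 70 gets queued for manual review

print(fraud_score({"amount": 6_000, "ship_country": "US",
                   "bill_country": "CA", "account_age_days": 3}))  # prints 100
```

However many rules there are, and however the points are weighted, the output is still a sum of hand-written if-statements, not a model learned from data.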
The only time models can run 'out of the box' is when the customer's situation is akin to that of its peers and the model is built on consortium data, where the confirmed fraud others have experienced can help pinpoint fraud for each participant in the consortium. Vendors that base their models on consortium data tend to use genuine predictive modeling and scoring techniques, e.g., neural or Bayesian networks, more often than vendors who don't. But consortium models have their limits, because many companies don't want to share their fraud data with anyone: not the authorities, not their competitors, and not the vendors.
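For contrast, here is a toy sketch, again with invented data, of what a consortium-style statistical approach looks like: confirmed-fraud labels pooled across participants train a naive Bayes classifier (a simple relative of the Bayesian-network techniques mentioned above) instead of a human writing rules.

```python
# Toy consortium-style model: confirmed-fraud labels pooled from many
# participants train a statistical scorer. Data and features are invented.
from sklearn.naive_bayes import BernoulliNB

# Each row: [large_amount, address_mismatch, new_account] as 0/1 flags
X = [[1, 1, 1], [1, 0, 0], [0, 1, 1], [0, 0, 0], [1, 1, 0], [0, 0, 1]]
y = [1, 0, 1, 0, 1, 0]  # 1 = fraud confirmed by some consortium member

model = BernoulliNB().fit(X, y)
# Probability that a new transaction with all three flags set is fraud
print(model.predict_proba([[1, 1, 1]])[0][1])
```

The point of the contrast: here the weights come from pooled confirmed-fraud labels, which is exactly why the approach only works when participants are willing to contribute their data.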
Further, self-learning models aren't a reality in fraud management, at least from what I have seen. The vendors have to run their own analyses to find the outliers, i.e., the transactions the model never evaluated, and then figure out what those transactions have in common so they can manually adjust the model to take them into account.
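That manual loop can be sketched too. A hypothetical version in Python, assuming a scoring function like the one above and invented field names: find the confirmed fraud the scorer failed to flag, then tally what those misses have in common so a human can patch the rules.

```python
# Hypothetical sketch of the manual loop behind "self-learning": find
# confirmed fraud the scorer missed, then look for shared traits so a
# human can adjust the rules. Field names are invented.
from collections import Counter

def missed_fraud(transactions, score_fn, threshold=70):
    """Confirmed-fraud transactions the scorer failed to flag."""
    return [t for t in transactions
            if t["confirmed_fraud"] and score_fn(t) < threshold]

def common_traits(misses, field):
    """Tally one field across the misses to spot what they share."""
    return Counter(t.get(field) for t in misses)

txns = [
    {"amount": 200, "channel": "mobile", "confirmed_fraud": True},
    {"amount": 9_000, "channel": "mobile", "confirmed_fraud": True},
    {"amount": 50, "channel": "web", "confirmed_fraud": False},
]
misses = missed_fraud(txns, lambda t: 40 if t["amount"] > 5_000 else 0)
print(common_traits(misses, "channel"))  # Counter({'mobile': 2})
```

Notice that nothing in this loop is automatic: a person has to look at the tallies and decide which new rule to write, which is the opposite of self-learning.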
In any event, the next time a vendor’s model seems like a black box, it probably means there are a few geeks behind the curtain mining your data and building rules. If nothing else, they should make it clear that after a set period, those rules will become ineffective so you will have to invite them back — and pay them a considerable amount of money — unless you’ve learned to write your own rules and ‘models.’