This QA first appeared in Data Science Briefings, the DataMiningApps newsletter, as a “Free Tweet Consulting Experience” — where we answer a data science or analytics question of 140 characters maximum. Want to submit your question as well? Just Tweet us @DataMiningApps. Want to remain anonymous? Then send us a direct message and we’ll keep all your details private. Subscribe now for free if you want to be the first to receive our articles and stay up to date on data science news, or follow us @DataMiningApps.
You asked: Regarding fraud analytics: how do you cope with fraudsters continuously changing their patterns?
Good question: the dynamic nature of fraud is indeed a hard issue to tackle, since fraudsters constantly try to outsmart detection and prevention systems by developing new strategies and methods. Therefore, so-called adaptive analytical models, as well as adaptive detection and prevention systems, are required in order to detect and resolve fraud as soon as possible; detecting fraud early is crucial. Hence, it is important to continuously backtest analytical fraud detection models, rather than testing them only once before putting them into production. The idea is to frequently verify whether the fraud model still performs satisfactorily. Changing fraud tactics create concept drift, meaning that the relationship between the target fraud indicator and the available data changes on an ongoing basis. It is therefore important to closely follow up on the performance of the analytical model, so that concept drift and any related performance deterioration can be detected in a timely way. Depending on the type of model and its purpose (e.g., descriptive or predictive), various backtesting activities can be undertaken, such as backtesting data stability, model stability, and model calibration.
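As a concrete illustration of backtesting data stability, one widely used metric is the Population Stability Index (PSI), which compares the distribution of a model input (or score) at training time against its distribution on recent data. The sketch below is a minimal, illustrative implementation; the function name, bin count, and the conventional thresholds (below 0.1 stable, 0.1–0.25 worth watching, above 0.25 significant shift) are common rules of thumb rather than anything prescribed by a specific system.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a recent sample."""
    # Derive bin edges from the baseline (training-time) distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the baseline range
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) / division by zero in empty bins
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Simulated example: a model score that stays stable vs. one that drifts
rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 10_000)  # score distribution at model build time
stable   = rng.normal(0.0, 1.0, 10_000)  # recent data, same distribution
drifted  = rng.normal(0.8, 1.3, 10_000)  # recent data after fraudsters adapt

print(f"PSI (stable):  {psi(baseline, stable):.3f}")   # near 0: no action needed
print(f"PSI (drifted): {psi(baseline, drifted):.3f}")  # large: investigate, consider retraining
```

In a production setting such a metric would be computed on a schedule (e.g., weekly) for each important input variable and for the model score itself, with alerts raised whenever the index crosses the chosen threshold.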