This article was originally published in the Fall 2012 edition of OnAnalytics, published by the Institute for Business Analytics at Indiana University’s Kelley School of Business.
This article focuses on insights from Joseph Cherianthundam, who at the time of this writing was a Managing Director specializing in financial analytics at StoneTurn, an international dispute consulting and financial investigations firm.
One of the many ways that businesses can use analytics to improve their financial health is in the area of occupational fraud detection. This type of fraud, in which an employee abuses his or her role for personal gain, is estimated to cost organizations an average of 5% of annual revenues, or a worldwide total loss of $3.5 trillion (Association of Certified Fraud Examiners, 2012). There are many sophisticated techniques now used to detect occupational fraud including:
- Segmentation and classification of transactions to model normal and abnormal patterns
- Encoding programs to deliver alerts when abnormalities are detected
- Testing for missing/anomalous values in sequential and other data
- Analyzing text data – for example, addresses – to identify suspect entries
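One of the techniques above, flagging transactions that fall outside normal patterns, can be sketched in a few lines. This is a hypothetical illustration only, not any firm's actual method; the payment amounts and the z-score threshold are invented for the example.

```python
# Hypothetical sketch: flag transaction amounts that deviate sharply
# from the mean -- one simple way to surface "abnormal" data points.
from statistics import mean, stdev

def flag_outliers(amounts, z_threshold=2.0):
    """Return indices of amounts more than z_threshold sample standard
    deviations from the mean. Threshold chosen for illustration."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > z_threshold]

# Invented payment data: one amount is wildly out of line.
payments = [120.0, 98.5, 110.0, 105.3, 99.9, 5000.0, 101.2, 97.8]
print(flag_outliers(payments))  # flags index 5, the $5,000 payment
```

Note that a simple z-score rule like this is exactly the kind of test that generates the false positives discussed below: a naturally large but legitimate payment gets flagged just as readily as a fraudulent one.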
Many analytics-driven fraud detection methods search for outliers – data points that don’t fit the normal transaction patterns. The problem is that massive data sets naturally contain many legitimate outliers. This creates one of the biggest challenges in applying today’s analytics to fraud detection: false positives. Each false positive imposes unnecessary cost on a company as it reviews the flagged employees and transactions to determine whether the alerts are valid. For this reason, it is vital that we, as analysts, seek ways to limit this type of error.
What we don’t want to do, however, is abandon these tests altogether, as there is often value to be gained from them. A better strategy is commonly to complement your quantitative data with qualitative data to reduce the occurrence of false positives. I’ll offer two examples of common fraud tests that can deliver useless results when they are applied without considering the business being analyzed, along with some potential strategies that could prevent these outcomes.
Is the norm abnormal?
One fraud test that is commonly applied to payments is a test for weekend activity. Because corporations are generally not inclined to pay overtime for their accounts payable department, it is reasonable to conclude that payments recorded on a weekend date should be regarded as suspect.
In one engagement, however, this test generated a flood of weekend alerts. A conclusion that is often reached in these scenarios is to throw out the test. But as we looked more closely at the individual source systems, it made sense to keep the test for the disbursement systems that were based in the States rather than those operating in-theater, where weekend processing was routine. By taking that extra step of investigation, we applied the test where it provided value, and ultimately delivered a more meaningful output to our client to review for potential fraud.
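The refined version of the test can be sketched as follows. This is a hypothetical illustration: the record layout, system names, and dates are invented; the point is restricting the test to the systems where weekend activity is genuinely abnormal.

```python
# Hypothetical sketch of a weekend-payment test restricted to the
# disbursement systems where weekend processing is not routine.
from datetime import date

def weekend_payment_flags(payments, systems_to_test):
    """Flag payments recorded on a Saturday or Sunday, but only for
    the named disbursement systems."""
    return [p for p in payments
            if p["system"] in systems_to_test
            and p["date"].weekday() >= 5]  # 5 = Saturday, 6 = Sunday

# Invented records: two U.S.-based payments and one in-theater payment.
payments = [
    {"id": "A-101", "system": "US-AP",      "date": date(2012, 6, 2)},  # Saturday
    {"id": "A-102", "system": "US-AP",      "date": date(2012, 6, 4)},  # Monday
    {"id": "B-201", "system": "IN-THEATER", "date": date(2012, 6, 2)},  # Saturday
]

# Apply the test only to the U.S.-based system; the in-theater
# Saturday payment is routine and should not raise an alert.
flagged = weekend_payment_flags(payments, systems_to_test={"US-AP"})
print([p["id"] for p in flagged])  # only A-101 is flagged
```

The qualitative knowledge – which systems run seven days a week – lives in the `systems_to_test` argument rather than in the test logic itself, so the same test can be reused across clients with different operating patterns.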
Are the data appropriate for the test?
Another familiar test involves a mathematical concept known as Benford’s Law. Simply put, Benford’s Law states that in a set of naturally occurring numbers, the leading digit is distributed in a specific, non-uniform way: the probability that the leading digit is d is log10(1 + 1/d), so smaller digits (1 or 2) are far more likely than larger ones (8 or 9). Without going into detail as to why this is (contact me if you’d like an explanation), I can say that it bears out in many circumstances relating to expenses.
Recently, however, we had a client whose expense data showed a spike in amounts with a 6 as the leading digit. In a strict application of the test, this would be a flag for fraud. But we looked at some of the output and found that the company’s cell phone packages fell into the range of $60.00–$69.99 and were skewing the outcomes. Again, we did not dispense with the test altogether, but we did pull out the cell phone expense records, and the resulting findings provided a more actionable set of results.
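A first-digit test along these lines can be sketched as follows. The expense amounts are invented for the illustration – the cluster of $60–$69.99 charges mimics the cell phone packages described above – and the "spike" threshold is an arbitrary choice for the sketch, not a standard cutoff.

```python
# Hypothetical sketch of a Benford's Law first-digit test.
import math
from collections import Counter

def benford_expected(d):
    """Benford probability that the leading digit is d (1-9)."""
    return math.log10(1 + 1 / d)

def first_digit(amount):
    """Leading digit of a positive amount (assumes amount >= 1)."""
    return int(str(int(amount))[0])

# Invented expense data: six cell-phone-package charges in the $60s
# sit alongside a handful of ordinary expenses.
expenses = [64.99, 62.50, 69.99, 61.00, 65.49, 60.00,   # phone packages
            12.40, 23.10, 104.75, 18.99, 31.20, 250.00]

counts = Counter(first_digit(a) for a in expenses)
n = len(expenses)
for d in range(1, 10):
    observed = counts[d] / n
    expected = benford_expected(d)
    # 0.15 is an arbitrary illustration threshold, not a standard test.
    marker = " <-- spike" if observed - expected > 0.15 else ""
    print(f"digit {d}: observed {observed:.2f}, expected {expected:.2f}{marker}")
```

Running this flags digit 6, where half the amounts fall against a Benford expectation of about 7% – exactly the kind of spike that, on investigation, may trace back to a routine recurring charge rather than fraud.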
There’s a lot of enthusiasm at the moment for business analytics, and rightly so. There is access to more data and more processing power than ever before, making it feasible to perform tests that would never have been possible in previous decades. But to make effective use of these unprecedented resources, analysts need to make communication and contextualization essential components of their process. They need to understand client operations and design tests to fit not only the data but also the actual business that the data reflects. Doing so will lead to more meaningful and actionable results for our clients, which is critical as we continually seek to optimize the value provided by business analytics.