Industry Thoughts

How we continuously update our fraud detectors

November 14, 2023

  • Louise O'Connor
    Machine Learning Engineer

Fraudsters are always evolving their tactics.

That’s why our team at Inscribe works tirelessly to maintain a solution that can quickly and accurately identify signs of document fraud.

In trying to maintain a leadership position in the market, however, many companies get caught in a loop of continuously releasing new features for their customers, sometimes even falling into what industry leaders call “Featuritis” or “Feature Bloat.”

Continuously coming up with new features can be exciting — but not always effective for fraud detection. Fraud detectors and machine learning models inevitably degrade over time. Why? Because data changes over time (and sometimes very quickly). So if a model was trained a few years ago, it may have been trained on different data than the data that’s currently coming into our system. 

In addition, fraud techniques and methods are constantly evolving, which means that the data models and features that work today may not be enough to protect companies in the future. This is why we continuously evaluate our machine learning models and update or refine our existing features (instead of just focusing on releasing new features), as well as extend them to be even more powerful.

Keep reading for a peek behind the scenes on some of our machine learning operations for monitoring fraud detection performance. 

How we operationalize continuous improvement for fraud detectors

Monitor false positive and false negative rates

When you work in fraud detection, it’s not enough to just release a new feature. We have to continuously monitor detector performance, review false positives and false negatives, and identify where updates need to be made.

We use a number of dashboards to monitor performance and keep a close eye on any spikes in false positive or false negative rates. If we do see a spike, we’ll dig in. Sometimes it’s as simple as retraining our models on updated fonts in a particular institution’s bank statements. Other times, it’s more complex and reveals a new type of document fraud that’s showing up in our data.
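
As a rough illustration of the kind of check behind those dashboards, here is a minimal sketch (illustrative only, not our production monitoring) that computes daily false positive and false negative rates and flags a spike against a rolling baseline; the window size and spike factor are placeholder values.

```python
from dataclasses import dataclass

@dataclass
class DailyCounts:
    true_pos: int    # fraudulent documents correctly flagged
    false_pos: int   # legitimate documents incorrectly flagged
    true_neg: int    # legitimate documents correctly passed
    false_neg: int   # fraudulent documents that slipped through

def false_positive_rate(c: DailyCounts) -> float:
    denom = c.false_pos + c.true_neg
    return c.false_pos / denom if denom else 0.0

def false_negative_rate(c: DailyCounts) -> float:
    denom = c.false_neg + c.true_pos
    return c.false_neg / denom if denom else 0.0

def spiked(history: list[float], today: float, window: int = 30, factor: float = 2.0) -> bool:
    """Flag today's rate if it exceeds the recent rolling average by `factor`."""
    recent = history[-window:]
    baseline = sum(recent) / len(recent) if recent else 0.0
    return baseline > 0 and today > factor * baseline
```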

One example of why monitoring for false positives and false negatives is so important is our blocklist feature, which allows our customers to add fraudulent names and addresses. If those same names and addresses are used again, Inscribe can automatically reject that document.

But automatically blocking customers can create false positives and inhibit growth. That’s why we’ve worked so hard to fine-tune our blocklist feature. Without that fine-tuning, a basic blocklist might inaccurately turn down customers with similar names. For example, if my name (“Louise O’Connor”) were blocklisted, a poorly built blocklist feature might reject anyone named Lou O’Connor or Laura O’Connor as well.
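
To make that failure mode concrete, here is a simplified, hypothetical comparison (not how Inscribe’s blocklist actually works) between a loose similarity-based match and an exact normalized match; the threshold and the use of difflib for similarity are assumptions made for illustration.

```python
from difflib import SequenceMatcher

BLOCKLIST = {"louise o'connor"}  # names stored in a normalized form

def normalize(name: str) -> str:
    return " ".join(name.lower().split())

def naive_blocklist_hit(name: str, threshold: float = 0.8) -> bool:
    # A loose similarity threshold rejects anything "close enough" --
    # this is how a Lou O'Connor ends up blocked along with Louise.
    candidate = normalize(name)
    return any(
        SequenceMatcher(None, candidate, blocked).ratio() >= threshold
        for blocked in BLOCKLIST
    )

def exact_blocklist_hit(name: str) -> bool:
    # Match only the normalized name itself; near-misses pass through
    # for other detectors (or a human reviewer) to assess.
    return normalize(name) in BLOCKLIST

print(naive_blocklist_hit("Lou O'Connor"))   # True  -> a false positive
print(exact_blocklist_hit("Lou O'Connor"))   # False -> not auto-rejected
```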

Re-examine detector rules and results 

Another way we help teams improve their performance is by tracking specific detectors and identifying combinations of them that improve decision-making accuracy. For example, if a document contains a mismatched date as well as suspicious metadata, that is a stronger indication of fraudulent activity than either of those signals alone.

By cross-referencing the two signals, we’re able to help our customers avoid time-consuming manual investigations and arm them with clear, compelling data to substantiate their decision-making.
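
A hypothetical sketch of how two such signals could be cross-referenced into a single decision (the signal names and risk tiers are illustrative, not our actual rules):

```python
def combined_risk(signals: dict[str, bool]) -> str:
    """Cross-reference individual detector signals into one risk tier."""
    mismatched_date = signals.get("mismatched_date", False)
    suspicious_metadata = signals.get("suspicious_metadata", False)

    if mismatched_date and suspicious_metadata:
        return "high_risk"     # both signals together: strong evidence, escalate
    if mismatched_date or suspicious_metadata:
        return "needs_review"  # a single weak signal: route to manual review
    return "low_risk"

print(combined_risk({"mismatched_date": True, "suspicious_metadata": True}))  # high_risk
```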

Align with changing industry and customer needs

One of the big trends we’re seeing right now is an increase in image submissions. 

For example, customers take a screenshot of a bank statement on their phone and submit a JPG or PNG file instead of the PDF provided by their bank. From the customer’s point of view, the format is inconsequential; all they care about is providing the required information. But for our fraud detection software, and many other tools on the market, the format matters a great deal. Fraud is easier to detect in PDF documents, so we’ve built (and continue to build) detectors that make it possible to detect fraud in images as well, in response to market demand.
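
As a loose sketch of why the format matters in practice, a pipeline might route each submission to a different family of checks depending on file type; the detector names below are placeholders, not Inscribe’s actual detectors.

```python
PDF_DETECTORS = ["metadata_consistency", "text_layer_edits", "font_checks"]
IMAGE_DETECTORS = ["ocr_extraction", "visual_tamper_analysis", "screenshot_heuristics"]

def detectors_for(filename: str) -> list[str]:
    name = filename.lower()
    if name.endswith(".pdf"):
        # Structured file: metadata, fonts, and the text layer can be inspected directly.
        return PDF_DETECTORS
    if name.endswith((".jpg", ".jpeg", ".png")):
        # Pixels only: lean on OCR and visual analysis instead.
        return IMAGE_DETECTORS
    raise ValueError(f"unsupported format: {filename}")
```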

Our team recognizes that image submissions make for a better, faster, simpler experience for our customers’ customers. So one of our priorities is to continuously increase the capabilities of our image tool and ensure its Trust Score results approach the same level as our flagship PDF tool. 

Thinking of customer needs more broadly, we regularly solicit feedback from our customers so that we understand the challenges they are facing and the capabilities that will help them solve those issues. In this way, our engineering team takes a practical approach to development, building and enhancing our tool to meet their real needs today and tomorrow.  

Fighting fraud – accurately, at speed, and at scale with Inscribe 

In an ever-evolving landscape, our team is constantly striving to help our customers detect document fraud more quickly and with a high degree of accuracy, so that they can reduce fraud and credit losses and improve the performance of their risk management team. 

Want to see what Inscribe can do to help your organization? Schedule a demo of our product today and learn how we can help you automate manual processes, improve fraud detection, and start approving more customers with confidence.
