Annual Report
2026 State of

Document Fraud

How is AI changing document fraud? We analyzed millions of documents and interviewed 15 practitioners to find out.

6%

Documents flagged
as fraudulent

5x

Increase in AI-generated fraud (Apr–Dec 2025)

98%

Of fraud leaders concerned about AI-enabled fraud

Introduction

Document fraud has existed as long as documents have been used to establish trust

Fraudsters have always sought to exploit the gap between what a document claims and what is actually true.

In the 1920s, Victor Lustig twice convinced scrap metal dealers to purchase the Eiffel Tower by posing as a government official. He forged ministry stationery, rented a suite at the Hôtel de Crillon to receive meetings, and presented fabricated credentials authorizing him to sell the tower for scrap. His first victim was so embarrassed at being conned that he never reported the crime, allowing Lustig to attempt the same scheme a second time.

What makes this moment different is the scale and speed at which that exploitation can happen.

When I started in fraud back in the 1990s, fraud was really pretty straightforward and simple. There were only like two types of fraud. There was credit card fraud, if you had a credit card, and there was like check fraud if you had a bank card. But what happened over the years is banks started to release more products. During that time, fraud got a lot more complicated. But the tech also improved pretty dramatically.

— Frank McKenna, Chief Fraud Strategist, Point Predictive

Today, generative AI can produce a realistic pay stub in seconds. Template marketplaces sell editable bank statements for under ten dollars. A fraudster with no technical skills can purchase, customize, and submit convincing documentation without ever touching Photoshop.

And it is working. In 2025, Inscribe flagged approximately 6% of all documents processed across our network as fraudulent. That is roughly one in sixteen documents showing signs of manipulation, fabrication, or misrepresentation.

This report synthesizes what we learned in 2025 from three sources: detection data from the Inscribe network spanning millions of documents across banks, credit unions, fintechs, and lenders; a survey of 90 fraud and risk leaders conducted in November and December 2025; and interviews with practitioners including senior underwriters, chief risk officers, fraud managers, and industry experts.

The findings point in a clear direction. Document fraud is accelerating. Manual review is reaching its limits. And organizations that adapt will pull ahead of those that do not.

But this is an evolution, not a defeat. The same AI capabilities fraudsters use to create convincing fakes can be deployed to detect them. The fraud fighters we interviewed are not discouraged. They are adopting new tools, sharing intelligence across institutions, and rethinking workflows that have not changed in decades.

This report is designed to help fraud fighters and risk leaders understand the landscape, benchmark their approach, and identify opportunities to strengthen their defenses.

Let's start with what the data tells us.

01

By the Numbers

Every fraud strategy starts with understanding the threat. This section draws on Inscribe's year-to-date network data spanning millions of documents processed across hundreds of financial institutions, combined with survey responses from 90 fraud and risk leaders.

Together, they reveal where document fraud is concentrated, which document types carry the highest risk, and where fraudsters are finding soft spots in verification workflows.

The scale and shape of document fraud in 2025

Across our network in 2025, approximately 6% of all documents processed were flagged as fraudulent. That translates to roughly one in sixteen documents showing indicators of manipulation, fabrication, or misrepresentation.

To put that in context: if your organization processes 10,000 loan applications per year and each application includes three documents, you are potentially looking at roughly 1,800 fraudulent documents annually. Some will be obvious. Many will not.
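For readers who want to run the numbers against their own portfolio, the arithmetic is simple. The figures below are illustrative assumptions, and the 6% rate is the network-wide average rather than a forecast for any single institution.

```python
# Back-of-envelope estimate of fraudulent-document exposure.
# Assumptions (hypothetical): 10,000 applications per year, 3 documents each,
# and the ~6% network-wide flag rate reported above.
applications_per_year = 10_000
documents_per_application = 3
flag_rate = 0.06

documents_per_year = applications_per_year * documents_per_application
expected_flagged = documents_per_year * flag_rate

print(f"{documents_per_year:,} documents reviewed per year")        # 30,000
print(f"~{expected_flagged:,.0f} expected to show fraud signals")   # ~1,800
```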

For investigators who work these cases daily, the volume is relentless. Las Vegas Financial Crimes Detective Marc Evans spent six years in financial crimes before moving to cyber crimes, and document fraud was a constant.

I would see document fraud every single day. I'm talking about fake DMV temporary passes, DMV titles, fake treasury bonds that were completely made from scratch. I actually caught a guy one time—when we got him in custody, he was in the middle of making a fake treasury check, and it was still up on his computer screen in Photoshop.

— Marc Evans, Las Vegas Financial Crimes Detective

The cases Evans describes are not outliers. They reflect the industrialization of document fraud that our network data confirms.

The documents are just so much better looking now than they used to be. You can't necessarily tell if the spacing is off because thanks to AI and some of these other tools, the documents are just so much better looking now.

— Timothy O'Rear, Senior Underwriter, Rapid Finance

This tracks with what we have heard across the industry. Frank McKenna, Chief Fraud Strategist at Point Predictive and author of the Frank on Fraud blog, has watched fraud evolve for three decades. He describes fraudsters as increasingly sophisticated in exploiting multiple channels and document types simultaneously.

As document quality improves, the challenge shifts from just a detection problem to an operational burden. When fraudulent and legitimate documents are indistinguishable at first glance, review teams face higher workloads, longer queues, and increased risk of both missed fraud and unnecessary friction for good customers.

Fraud pressure is broadly distributed across document types

When we examine fraud rates by document type, a clear pattern emerges: most documents used to verify critical facts exhibit a similar baseline fraud rate, generally in the 4–7% range.

Bank statements, pay stubs, tax forms, business filings, and other financial documents are all consistently targeted. This indicates that document fraud is not confined to a single document class, nor driven by one specific verification step. Instead, fraud pressure is broadly distributed across workflows wherever documents are used to establish trust.

This aligns closely with how fraud and risk leaders view the landscape. In our survey, respondents consistently cited core financial and income-related documents as the most vulnerable to manipulation (reflecting the high-stakes decisions those documents support).

One document type breaks from the baseline: utility bills, which show a significantly higher fraud rate than other document categories in Inscribe’s network.

This does not mean utility bills are uniquely dangerous. Instead, they sit at the intersection of identity, convenience, and perception.

Utility bills are commonly used to verify proof of address, often as a secondary or supporting document. Because an address underpins many downstream decisions, from account opening to credit approval, utility bills frequently serve as an early anchor in identity formation.

At the same time, altering a utility bill often feels less serious than altering a primary financial document, even though the legal and risk implications are the same. In many cases, the behavior is not overtly malicious. Applicants may have recently moved, lack updated documentation, or “fix” an address without recognizing the action as deception.

The intent varies, but the risk signal does not.

This distinction matters. Treating utility bills as low-risk or secondary documents can create blind spots in verification workflows. 

Applying consistent scrutiny across all documents used to verify critical facts helps protect institutions and customers, especially when individuals may unknowingly cross a line.

Bank statements top the list of fraud fighter concern

Our survey of 90 fraud and risk leaders asked which document types they believe are most vulnerable to manipulation.

In our survey, 85.56% of respondents cited bank statements as the document type they are most concerned about, the highest of any category.

Bank statements, especially, are very complicated because we had to parse transactions. We had developed a system in terms of checking, ‘Do you see a transaction from Fleetio?’ which is software that our legitimate customers use. That was our way of analyzing these bank statements from a fraud and credit perspective. It was heavily manual, everything used to take an hour.

— Anurag Puranik, Chief Risk Officer, Coast

The challenge with bank statements is their complexity. Unlike a pay stub with a handful of fields, a bank statement contains dozens of transactions, running balances, dates, and formatting elements. That complexity creates more opportunities for subtle manipulation and more work for reviewers trying to catch it.
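Puranik's Fleetio check is a good example of what that parsing enables. Once transactions are structured, the check itself is only a few lines; the field names and vendor list below are illustrative assumptions, not Coast's actual logic.

```python
# Sketch: once a statement is parsed into transactions, a rule like
# "do you see a transaction from Fleetio?" becomes a simple lookup.
# Field names and the expected-vendor list are illustrative assumptions.
EXPECTED_VENDORS = ["fleetio"]  # software a legitimate customer in this segment would pay for

def has_expected_vendor(transactions: list[dict], vendors=EXPECTED_VENDORS) -> bool:
    descriptions = " ".join(t["description"].lower() for t in transactions)
    return any(vendor in descriptions for vendor in vendors)

transactions = [
    {"description": "ACH DEBIT FLEETIO SUBSCRIPTION", "amount": -119.00},
    {"description": "PAYROLL DEPOSIT", "amount": 8400.00},
]
print(has_expected_vendor(transactions))  # True: a signal of legitimacy, not proof
```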

Pay stubs and business financial documents rank second and third, reflecting the income verification use case that drives much of document fraud in lending.

Detection challenges are multi-dimensional

Our survey asked fraud leaders about their biggest challenges in detecting document fraud. The results reveal a problem that spans technology, process, and resources.

The top challenge, cited by 72% of respondents, is detecting subtle AI-driven changes. Angela Diaz sees this as a shift that requires fraud teams to think differently, because the quality of fakes is improving in ways that may not be visible to the naked eye.

Five years ago, ten years ago, you could spot a fake passport or a fake driver's license that was sent in. It looked so digitally perfect that you're like, get out of here. Or it was a bad Photoshop job. I think a lot of the phishing emails and phishing texts were easy to spot five years ago, ten years ago. There were major quality issues, grammar, spelling errors, just low quality logos. But we're going to see that quality go up with AI.

— Angela Diaz, Senior Principal of Operational Risk Management, Discover

The second and third challenges, time-consuming manual review (63%) and limited visibility into edits (62%), are related. Manual review takes time precisely because reviewers lack visibility into what has been changed. They are looking for anomalies without knowing where to look.

Notably, only 13% of respondents said they are unsure what signals to look for. Fraud teams know what they are looking for. The problem is that the signals are increasingly invisible to manual inspection.

Key Takeaways:

By the Numbers

Document fraud operates at scale

With approximately 6% of documents flagged across Inscribe’s network, fraud is a volume and operational challenge, not an edge case.

Fraud pressure is widespread

Most document types used in verification show similar baseline fraud rates, underscoring that no single document class is immune.

Utility bills warrant attention

Higher fraud rates reflect their role in identity verification and lower perceived severity — not inherently higher risk.

Intent is not binary

Many instances of document manipulation are non-malicious, but they still introduce real risk into decisioning systems.

Consistency matters

Applying uniform verification standards across document types reduces blind spots and improves outcomes for institutions and customers alike.

In the next section, we explore how generative AI is accelerating document fraud and what that means for detection strategies.

02

The AI Arms Race

How generative AI is changing the game for both sides

The fraud landscape has always been a cat and mouse game. Fraudsters find vulnerabilities. Institutions patch them. Fraudsters adapt. The cycle continues.

What makes this moment different is the speed of that cycle. Generative AI has compressed the timeline for creating convincing fraudulent documents from hours to seconds. Tools that once required technical skill are now accessible to anyone with an internet connection. And the quality of the output is improving faster than many detection systems can keep up.

This section examines how AI is reshaping document fraud from both sides: how fraudsters are using it to scale attacks, and how fraud teams are deploying it to fight back.

The rise of AI-generated documents & deepfake fraud

AI-generated document fraud is no longer theoretical. It is a distinct, measurable, and accelerating threat, one that is increasingly visible earlier in financial workflows.

Across Inscribe’s network, detected AI-generated document fraud increased nearly fivefold between April and December 2025. Interestingly, this growth was not linear. It accelerated in waves, reflecting how quickly new tools, templates, and techniques spread once they become accessible.

This trend builds on what we reported earlier in the year. In the first half of 2025, AI-generated and template-based document fraud increased 208% year over year. The second half of the year made the trajectory clearer: AI-driven document fraud is compounding, not plateauing.

This acceleration is reflected in how fraud teams view the risk. In our survey of 90 fraud and risk leaders:

  • 65.56% said they are very concerned about AI-generated or AI-edited document fraud
  • 32.22% said they are somewhat concerned
  • In total, 97.78% expressed concern about this threat vector

Only 2.22% reported being neutral or unconcerned, making AI-enabled document fraud one of the most consistently cited risks across the survey.

Angela Diaz, Senior Principal of Operational Risk Management at Discover, has watched this evolution closely. She believes the industry needs to prepare for AI-powered fraud, but cautions against panic.

We do need to be prepared for how fraudsters might utilize AI, but I think that we are much better served as fraud fighters focusing on how we can use AI to improve the foundational things that we have in place at our businesses, utilizing it to work smarter, not harder.

— Angela Diaz, Senior Principal of Operational Risk Management, Discover

This framing matters. AI is changing the economics of document fraud, not eliminating the need for strong fundamentals.

AI-generated documents are created entirely from scratch using image generation models. These documents never existed in legitimate form. A fraudster prompts an AI system to create a bank statement, pay stub, or utility bill, and the model produces something that looks convincing at first glance.

Law enforcement is seeing the same shift. Las Vegas Detective Marc Evans, who specializes in fraud and financial crimes investigations, recently tested one of Google's AI tools and was alarmed at how easily it could be weaponized.

I was playing around with [Gemini's Nano] the other day and I was like, I wonder what can I change with this thing? And I was shocked at how easy it is to change the document. All you do is put a simple, one sentence prompt into it and go change the name from this to this and it does it. This is just a whole new level of fraud that's going to come across with documentation fraud.

— Marc Evans, Las Vegas Financial Crimes Detective

AI-edited documents start as real documents that fraudsters modify using generative tools to change specific fields: names, dates, account numbers, transaction amounts. Because most of the document is genuine, these modifications are harder to catch.

Can you spot the deepfake document out of the examples below? (All personally identifiable information is either shared with consent or entirely fabricated.)

The trick is that they’re all deepfakes (a mix of AI-edited and AI-generated).

Michael Coomer, Director of Fraud Management at BHG Financial, sees these types of deepfake documents as particularly dangerous because they exploit what appears to be legitimate.

It may not be direct attacks, it might not be changes to how bad actors are using AI to increase the sophistication of their attacks, but they're supporting their interactions with victims, particularly in a scam environment where they are creating more realistic identities. They're creating more realistic stories and, essentially, a whole layer of posturing and publicity.

— Michael Coomer, Director of Fraud Management, BHG Financial

AI is also lowering the barrier to entry in quieter ways. Fraudsters do not always need to generate documents outright. Large language models can simply teach users how to alter documents, walking them through tools like Photoshop step by step.

The result is not just more fraud, but earlier fraud. AI-driven document manipulation increasingly appears during onboarding, underwriting, and compliance checks (often before traditional behavioral or transactional signals fire).

This shift has important implications for detection strategies. As AI-generated and AI-edited documents become more common, institutions that rely solely on downstream signals may find themselves reacting too late.

The document template store economy

Despite advances in generative AI, the path of least resistance for most fraudsters remains surprisingly low-tech: purchasing pre-made templates from online marketplaces.

A growing ecosystem of websites now sells editable document templates (bank statements, pay stubs, utility bills, tax forms) for as little as $10. These sites operate openly on the public web, positioning themselves as providers of "novelty" or "replacement" documents while offering exactly what a fraudster needs.

One UK-based site, Replace Your Documents, illustrates the model. The homepage promises “high quality novel documents” using “the very latest printing technology.” The product catalog includes:

  • Bank statements from “all major UK banks”
  • Utility bills including HMRC tax codes, P45s, P60s, and SA302s
  • Payslips printed on “high quality Sage payslip paper” with figures “calculated based on your net/gross salary”
  • Custom document editing where customers provide details and the site edits existing documents
  • Custom templates built to order within 3 days

The site offers same-day delivery of digital PDFs, printed copies within 2-3 days, and “unlimited modifications and changes until you're satisfied.”

The legal disclaimer is telling. The site states that “providing inaccurate or incorrect information that deliberately misleads others is committing fraud” and that products are “for theatrical, educational or novelty purposes only.” But the product descriptions — “ideal for your novelty proof of address needs,” “ideal as a novelty proof of income documents” — make the intended use clear.

This is not an isolated example; similar sites, such as Replace Your Docs and TemplateLab, operate the same way. Furthermore, Inscribe's research into search behavior found significant volume for fraud-related keywords:

These searches represent demand. And where there is demand, supply follows.

Marketplaces now sell thousands of ready-to-go templates for under ten dollars. It's cheap, it's fast, and it's improving.

— Ronan Burke, CEO, Inscribe

The economics are straightforward. A fraudster can purchase an editable bank statement template, customize it with a target's information, and submit it as part of a loan application. The investment is minimal. The potential return, if the loan is approved, is substantial.

Hailey Windham, founder of CU Fight Fraud and host of the Fraudology podcast, points out that fraudsters will always take the easiest path available.

Fraudsters are going to do the easiest thing that helps them swindle you. If you just make it very cost prohibitive for them, they will go away. They might tackle somebody else, but they are not coming after you.

— Hailey Windham, Founder, CU Fight Fraud

The template economy also explains a pattern we see in our detection data. Many fraudulent documents are not sophisticated. They use the same templates, the same formatting errors, the same telltale signs. Fraudsters test which templates pass verification systems, then scale what works.

The combination of template stores, document “generators,” and AI editing tools creates an accessible fraud toolkit. A fraudster can buy a template, use AI to customize specific fields, and produce a document that is neither fully synthetic nor simply a purchased fake. It is a hybrid that requires detection systems capable of catching both patterns.

AI tactics fraudsters are using

Document fraud does not happen in isolation. It is typically part of a broader scheme that includes identity fraud, application fraud, and social engineering. And AI is making all of those components more convincing.

What I have noticed recently is when we pick up the calls and call these people, the signal has just vanished. Fraudsters can type into ChatGPT and say 'I am a business running this and give me a script to talk to a customer service agent' and you can have a very detailed nice script that makes you sound really legitimate and knowledgeable. Our mind was blown.

— Anurag Puranik, Chief Risk Officer, Coast

This erosion of traditional signals extends beyond phone calls. Fraudsters are using AI to:

  • Generate realistic business websites in minutes using no-code tools
  • Create professional-looking email domains that mimic legitimate businesses
  • Produce consistent narratives across application forms, documents, and verbal verification
  • Research and incorporate industry-specific terminology that makes their stories more believable

Patrick Lord at Rapid Finance has seen fraudsters get creative with web presence in ways that are easy to overlook.

I can go buy ‘ABC Solutionss’ with two S's at the end. And you either have it forwarded or it looks similar enough to ‘ABC Solutions’ with one S at the end. It can be really easy to overlook something like that. You have to make sure there's the proper checks that go into it. You're actually checking out that domain and maybe having some automated checks and occasional human checks behind it.

— Patrick Lord, Senior Project Manager, Rapid Finance

You can spin up a website that makes your operation seem so legitimate with real images in a matter of minutes to hours. That signal is gone. We have to start digging into the metadata. How long has the registration been? Is this a recently registered website?

— Anurag Puranik, Chief Risk Officer, Coast
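Both observations lend themselves to simple automated checks. The sketch below flags lookalike domains using only the Python standard library; the reference names and similarity threshold are illustrative, and a fuller check would also pull the domain's WHOIS creation date, which this sketch only notes in a comment.

```python
# Sketch: flag applicant domains that closely resemble a known business name
# but do not match it exactly. Reference names and threshold are illustrative.
from difflib import SequenceMatcher

def lookalike_score(candidate: str, reference: str) -> float:
    """Similarity between two domain labels, from 0.0 to 1.0."""
    return SequenceMatcher(None, candidate.lower(), reference.lower()).ratio()

def flag_lookalikes(applicant_domain: str, known_businesses: list[str],
                    threshold: float = 0.85) -> list[str]:
    label = applicant_domain.split(".")[0]  # "abcsolutionss" from "abcsolutionss.com"
    hits = []
    for name in known_businesses:
        score = lookalike_score(label, name)
        if threshold <= score < 1.0:        # close to a known name, but not an exact match
            hits.append(f"{applicant_domain} resembles '{name}' (similarity {score:.2f})")
    return hits

print(flag_lookalikes("abcsolutionss.com", ["abcsolutions"]))
# A fuller check would also query WHOIS for the domain's creation date and
# flag registrations that are only days or weeks old.
```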

The implication is that document verification cannot operate in a silo. A convincing document submitted alongside a convincing website, a convincing phone presence, and a convincing application creates a reinforcing web of apparent legitimacy. Detection systems need to evaluate documents in context, not in isolation.

Jen Lamont, BSA Compliance Officer and Fraud Manager at America's Credit Union, emphasizes the importance of looking beyond the surface.

We're making sure that phone number is tied to the proper member, to the proper business. We're making sure that the emails seem to make sense. Sometimes we'll get emails that appear to belong to someone else. We're doing Google searches, making sure that we can find the business if it's applicable on Google. Restaurants, clothing stores, that sort of thing, those kinds of businesses should have web presence.

— Jen Lamont, BSA Compliance Officer & Fraud Manager, America's Credit Union

Key Takeaways:

The AI Arms Race

AI is accelerating fraud, not changing its goal

Generative AI compresses the time and effort required to produce convincing fraudulent documents, pushing fraud earlier into onboarding and underwriting workflows.

Fraud has been productized

Template marketplaces and AI editing tools have turned document fraud into a low-cost, scalable service — requiring little skill and minimal investment.

Traditional signals are weakening

Documents, websites, phone interactions, and application narratives are now easier to fabricate and coordinate, making isolated document checks insufficient.

Fraud leaders see the shift clearly

Nearly all surveyed risk leaders express concern about AI-enabled document fraud, signaling broad recognition that existing controls must evolve.

AI is also the advantage

The same technologies enabling fraud can strengthen detection — powering faster analysis, rule creation, and cross-signal verification.

03

The Cost of Inaction

Why legacy approaches cost more than ever

The previous section examined how AI is changing the fraud landscape. This section looks at what happens when organizations do not adapt.

The costs are not abstract. They show up in operational bottlenecks, missed fraud categories, and customer attrition. They show up in fraud teams working until midnight to clear backlogs. They show up in good customers who take their business elsewhere because decisions take too long.

For many institutions, the breaking point is not a single catastrophic fraud event. It is the slow accumulation of inefficiencies that compound over time until the gap between what the team can handle and what the business requires becomes unsustainable.

If somebody exploits a loophole in your system, all the fraudsters in the world are instantly going to know about it. And they're all attacking you within a matter of days to weeks, not months. Your time to reaction has to be really fast. I have known stories of people losing millions of dollars in a matter of a week where something very obvious was just wrong.

— Anurag Puranik, Chief Risk Officer, Coast

This section examines three dimensions of the cost of inaction: the operational burden of manual review, the hidden problem of first-party fraud, and the competitive disadvantage of slow decision-making.

Manual review breaks at scale

Before automated document verification, fraud teams relied on human reviewers to examine every document that came through the door. For many institutions, that is still the reality.

At that time, we didn't necessarily have an OCR system. So every single document that was coming through to us, and I mean these were thousands of documents a day, were reviewed by a human pair of eyes. I have memories of us being super busy some nights and myself and a couple of coworkers staying up till 11, midnight just trying to get through the documents manually.

— Timothy O'Rear, Senior Underwriter, Rapid Finance

The time required for thorough manual review is substantial. Before implementing automated verification, some institutions reported spending 60 to 90 minutes per application on document review alone.

It used to take like an hour to one and a half hours just to do one customer. Most of them today are automated.

— Anurag Puranik, Chief Risk Officer, Coast

Jorge Cortes saw the same pattern at Kinecta, where investigators were managing 30 to 40 cases per month while relying on manual methods to verify documents.

Before Inscribe, we had to rely on reference documents or validate information over the phone. It was time-intensive and inefficient.

— Jorge Cortes, Vice President, Enterprise Risk Management, Kinecta

That time adds up. If your team processes 50 applications per day and each one takes an hour of document review, you are burning 50 hours of analyst time daily just on documents. That is more than six full-time employees dedicated solely to looking at documents.
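Translating those hours into headcount is straightforward. The volumes and the eight-hour workday below are assumptions for illustration, not benchmarks.

```python
# Rough conversion of manual review time into full-time headcount.
# Assumptions: 50 applications/day, 60 minutes of document review each,
# and an 8-hour analyst workday.
applications_per_day = 50
review_hours_each = 1.0
workday_hours = 8

review_hours_per_day = applications_per_day * review_hours_each  # 50 hours
full_time_equivalents = review_hours_per_day / workday_hours     # 6.25 analysts
print(f"{review_hours_per_day:.0f} review hours/day ≈ {full_time_equivalents:.2f} analysts")
```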

Patrick Lord describes the manual processes that Rapid Finance used before adopting automated tools.

It used to be really, really difficult to catch fraud. I remember training underwriters about a decade ago and we would just have these custom spreadsheets that we'd have to spot check some math occasionally. And we all had these little workflows where we would open up all the three documents next to each other, the four documents next to each other, and just compare the beginning and ending balances. And it was like this whole mental calculus that went on.

— Patrick Lord, Senior Project Manager, Rapid Finance

Those workflows worked when fraudsters were using basic image editing tools. They do not work when fraudsters are using AI to ensure the math adds up, the formatting is consistent, and the metadata looks clean.
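For reference, the balance-continuity check Lord describes reduces to a few lines once statements are parsed; the problem is no longer writing the check but the fact that well-made fakes now pass it. A minimal sketch, assuming each statement has already been reduced to an opening and closing balance (field names are illustrative):

```python
# Sketch of the manual workflow Lord describes: line up consecutive monthly
# statements and confirm each month's opening balance matches the previous
# month's closing balance. Statement fields are assumed for illustration.
def continuity_breaks(statements: list[dict], tolerance: float = 0.01) -> list[str]:
    """statements: chronological list of {"month": ..., "opening": ..., "closing": ...}"""
    breaks = []
    for prev, curr in zip(statements, statements[1:]):
        if abs(prev["closing"] - curr["opening"]) > tolerance:
            breaks.append(f"{prev['month']} closes at {prev['closing']:.2f} "
                          f"but {curr['month']} opens at {curr['opening']:.2f}")
    return breaks

statements = [
    {"month": "July", "opening": 4200.00, "closing": 5100.00},
    {"month": "August", "opening": 6100.00, "closing": 5900.00},  # inflated opening balance
]
print(continuity_breaks(statements))
```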

Hailey Windham puts it simply: manual review alone is no longer sufficient.

You cannot replace the fraud fighter with tech, and the fraud fighter in today's world can't function without the tech. I think that you have to have both in order to have a truly effective approach to fighting fraud.

— Hailey Windham, Founder, CU Fight Fraud

First-party fraud goes underreported

When a document shows manipulation of income figures, account balances, or transaction history, but the name and address are genuine, that is often a signal of first-party fraud. The applicant is real, but they are altering their finances.

The data suggests this is more common than many organizations realize.

What we found is that about 40% of all documents that are altered have only the financial details altered. Another 40% have both, and only 20% show only identity details altered.

— Brianna Valleskey, Head of Marketing, Inscribe

Michael Coomer points out that one reason first-party fraud goes underreported is that many organizations are reluctant to classify this behavior as fraud.

There is a requirement for us as financial professionals to reconsider and reframe our assumptions and hesitancies around what we classify as good customers. I think there's a lot of unwillingness to say that first-party fraud necessarily exists within your given customer base.

— Michael Coomer, Director of Fraud Management, BHG Financial

Angela Diaz encountered this reluctance firsthand when speaking at industry events.

What was eye-opening was after I spoke on a Fraud Fight Club panel, when I stepped off the stage, how many fraud leaders came up to me being very open and honest about, ‘I don't even know my first party fraud numbers. I haven't even been tracking that. I don't even have the data for that.’ I had one industry leader that told me that that was the largest pool of fraud for them once they actually dug in to being able to track it.

— Angela Diaz, Senior Principal of Operational Risk Management, Discover

Michael Coomer describes how first-party fraudsters rationalize their behavior in ways that make detection harder.

They rationalize to themselves that it's okay to add $300,000 to your daily balances. That's just, I have the money available to me at a different time. It's just, I took a downturn for this month. So I just want to, we're just going to round up by a few hundred thousand dollars. It's totally reasonable, right? You can follow any thread on Reddit when you look at people rationalizing the behaviors and the activity.

— Michael Coomer, Director of Fraud Management, BHG Financial

Angela Diaz recommends that organizations develop clear policies for handling customers who cross the line.

Just have an exit customer strategy. That's my risk management tip of the day. If you don't have an exit customer strategy documented that is based on analysis of customer behavior, you need to get one documented so that you can take action on that.

— Angela Diaz, Senior Principal of Operational Risk Management, Discover

The distinction between first-party and third-party fraud matters for detection strategy. Third-party fraud often involves identity signals: mismatched names, addresses that do not exist, phone numbers that cannot be verified. First-party fraud may pass all of those checks.

Organizations that can’t track first-party fraud may be missing a significant portion of their fraud problem.

Speed-to-decision as a competitive advantage

Document verification does not happen in a vacuum. It is one step in a customer journey that includes application, underwriting, approval, and funding. When that step takes too long, the entire journey suffers.

For lenders especially, speed matters. Good customers have options. If your process takes days while a competitor takes hours, you will lose business.

The big selling point for a lot of our financing is that we can get the money into clients' accounts quickly within 24 to 48 hours. Well, if you're taking 24 to 48 hours to even look at their bank statements before you can even get a quote out to them, you're basically guaranteed to lose that client.

— Timothy O'Rear, Senior Underwriter, Rapid Finance

This dynamic creates adverse selection. The customers who are willing to wait are often the ones who have no other options. The best customers, the ones with strong credit and legitimate documentation, go elsewhere.

Coast has built their process around speed, with a goal of deciding 90% of applications the same day they are submitted.

We have an SLA of 90% of apps decided in the same day and we are able to hit that.

— Anurag Puranik, Chief Risk Officer, Coast

That speed is possible because document verification is automated. When a customer uploads bank statements, the documents are analyzed in seconds and a trust score is returned. If everything checks out, the application moves forward without human intervention. If something is flagged, a human reviewer steps in with all the context they need to make a quick decision.

As soon as the customer uploads it, it pushes through Alloy, gets passed through Inscribe, Inscribe sends back a response directly to Alloy. It flows back to our system and we show to a customer saying like, you're approved, if the revenue checks out. If the revenue does not check out, it goes to an underwriter to see what the issue was.

— Anurag Puranik, Chief Risk Officer, Coast
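Stripped of vendor specifics, the routing Puranik describes reduces to a small decision function. The sketch below is a generic illustration, not Coast's actual configuration; the threshold, score scale, and field names are assumptions.

```python
# Sketch: route an application based on a document trust score returned by an
# automated verification step. Threshold and response fields are illustrative.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    trust_score: float      # 0.0 (high risk) to 1.0 (trusted), hypothetical scale
    revenue_verified: bool  # did parsed deposits support the stated revenue?

def route_application(result: VerificationResult, auto_approve_threshold: float = 0.9) -> str:
    if result.trust_score >= auto_approve_threshold and result.revenue_verified:
        return "auto_approve"   # the customer sees an instant approval
    return "manual_review"      # an underwriter investigates with full context

print(route_application(VerificationResult(trust_score=0.95, revenue_verified=True)))   # auto_approve
print(route_application(VerificationResult(trust_score=0.95, revenue_verified=False)))  # manual_review
```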

The result is a customer experience that feels seamless. Good customers get approved quickly. Suspicious applications get the scrutiny they deserve. And the fraud team is not buried in a backlog of routine reviews.

Rapid Finance saw similar improvements. Before automation, deals would stall in document review. After automation, the percentage of applications that reach final underwriting increased from around 50% to between 85% and 90%.

Every measurable internal metric that we have for that has improved drastically since 2020. I think Inscribe, along with a couple of others, just plays a huge part in that.

— Patrick Lord, Senior Project Manager, Rapid Finance

Speed also matters for fraud prevention. The faster you can identify a problem, the less damage it can do.

Being able to use Inscribe to kind of foreshadow a couple of things ahead of time is huge in terms of the trust in the process. There will be plenty of times where we'll have bank statements that come through where we might not necessarily be 100% sure that there's some sort of fraud going on, but we'll foreshadow that broker partner. You're welcome to submit this deal. We'll take a look at it. But based on the results that we're seeing, we may need to do another layer of bank verification here.

— Timothy O'Rear, Senior Underwriter, Rapid Finance

That early warning allows the team to set expectations and avoid surprises later in the process. It also protects broker relationships by being transparent about potential issues upfront.

Jen Lamont sees speed as essential for serving members effectively.

It's so important for us to lean on automation and AI because our job, especially in the credit union space, is to help our members. One fraud situation can be a 45 minute conversation, an hour long conversation. If I don't have some automation on the back end in my system, how am I supposed to give my undivided attention to our members? They need the emotional support.

— Jen Lamont, BSA Compliance Officer & Fraud Manager, America's Credit Union

When routine document checks are automated, fraud analysts can focus on what humans do best: having difficult conversations, supporting victims, and investigating complex cases that require judgment and empathy.

The compound effect of inaction

The costs described in this section have compounding effects.

Manual review creates backlogs. Backlogs slow down decisions. Slow decisions drive away good customers. Fewer good customers means a riskier portfolio. A riskier portfolio means more fraud. More fraud means more manual review.

The cycle feeds itself.

Organizations that break the cycle by automating routine verification, tracking first-party fraud, and prioritizing speed find themselves in a different position. Their teams have capacity to focus on complex cases. Their customers get faster answers. Their portfolios skew toward lower-risk applicants.

In a way it almost ends up turning to revenue protection. When I look at a program like Inscribe, generally speaking, you catch one or two big deals of fraud that you wouldn't have caught, it pays for itself. Everything else after that, it's like you are protecting your revenue and growing it in a way where you can trust it.

— Patrick Lord, Senior Project Manager, Rapid Finance

The organizations that have broken this cycle are seeing measurable returns. Matt Overin at Logix Federal Credit Union frames the ROI in concrete terms.

We started using Inscribe in late April last year. And in just eight months, we saw potential loan fraud savings of over $3 million and countless ID theft saves.

— Matt Overin, Manager of Risk Management, Logix Federal Credit Union

Hailey Windham frames it as a mindset shift.

We have to stop it together. Financial institutions, we all need to come together and have that same approach. It's consistent across the board. And the individual doesn't feel attacked. They don't feel like ‘this institution's picking on me, but this other institution allowed it.’ We have to stop it together.

— Hailey Windham, Founder, CU Fight Fraud

The cost of inaction is not just the fraud you miss. It is the compounding effect of a system that cannot keep up with the threats it faces.

Key Takeaways:

The Cost of Inaction

Manual review breaks at scale

Institutions that relied on human reviewers report spending 60 to 90 minutes per application on document review. At volume, that is unsustainable.

Inconsistency is a vulnerability

When detection depends on individual skill and attention, outcomes vary. Fraud that slips through is often a function of who reviewed the file that day.

First-party fraud goes underreported

Approximately 40% of altered documents show only financial details modified, suggesting the applicant is real but misrepresenting their situation. Many organizations do not track this category at all.

Slow decisions cost good customers

Applicants with strong credit and legitimate documentation have options. If your process takes days while competitors take hours, you will lose business to them.

Speed enables better fraud prevention

Early identification of potential issues allows teams to set expectations, request additional verification, and avoid surprises later in the process.

The costs compound

Manual review creates backlogs, which slow decisions, which drive away good customers, which concentrates risk, which creates more fraud. Organizations that break this cycle gain compounding advantages.

In the final section, we examine what the most effective fraud teams are doing differently: the human-AI partnership, layered defense strategies, and the power of industry collaboration.

04

Winning Strategies

What the most effective fraud teams are doing differently

The previous sections painted a challenging picture: document fraud is growing, AI is making fakes harder to detect, and manual processes cannot keep up. But the fraud fighters we interviewed are adapting.

This section examines what separates effective fraud programs from those that are falling behind. Three themes emerged consistently across our interviews and survey data: the importance of human-AI partnership, the necessity of layered defenses, and the power of industry collaboration.

These organizations are successfully managing document fraud at scale.

Human-AI collaboration becomes essential

The most effective fraud teams are not choosing between humans and AI. They are combining them.

AI handles what machines do well: processing large volumes of documents, detecting patterns across millions of data points, and flagging anomalies that would be invisible to human review. Humans handle what people do well: exercising judgment in ambiguous situations, building relationships with customers, and adapting to novel fraud schemes that do not match existing patterns.

Frank McKenna sees AI as a tool that elevates what fraud analysts can accomplish.

The way I think about us moving forward is everybody's going to be a manager of AI. And you could have one person have 10 or 15 or 20 PhD level experts who can work very quickly and do things for you. Those are basically AI agents.

— Frank McKenna, Chief Fraud Strategist, Point Predictive

This shift changes what it means to be a fraud analyst. Instead of spending hours on manual document review, analysts become orchestrators of AI systems, stepping in when judgment is required and letting automation handle the rest.

I don't think AI is going to replace fraud analysts at all. I think it'll change what fraud analysts do. I think it'll enhance what fraud analysts do. But I think that you're always going to have the human in the loop.

— Frank McKenna, Chief Fraud Strategist, Point Predictive

The fraud fighters who have deployed AI for document fraud detection are seeing measurable results. BCU prevented $5.6 million in losses from confirmed altered documents in just the first nine months of 2025. Logix Federal Credit Union prevented $3 million in potential fraud losses in eight months. Kinecta saved $850,000 in fraud losses while reducing document review time by 99%.

Some of our largest dollar preventions in the past few years have come directly from Inscribe detections. We're talking millions in losses prevented, and that's made a measurable difference in how fast and how confidently we can stop fraud.

— Nickie Christianson, Senior Manager, Account Protection Team, BCU

The goal is not to remove humans from the process. It is to free them from routine work so they can focus on what matters most.

Human judgment, however, still plays a critical role. Hailey Windham points to something machines cannot replicate.

AI will never have a gut. And the fraud fighters trust their gut. They have this spidey sense. Something that you won't get an alert for, but something just doesn't feel right.

— Hailey Windham, Founder, CU Fight Fraud

Jen Lamont agrees, adding another dimension that AI lacks.

AI is never gonna have empathy. We need both in order to have a truly effective approach to fighting fraud. It has to be a dynamic approach.

— Jen Lamont, BSA Compliance Officer & Fraud Manager, America's Credit Union

The winning formula is not AI alone. It is AI integrated thoughtfully into a broader fraud program, with humans providing oversight, judgment, and the relationship skills that machines cannot offer.

AI detection methods are being adopted by fraud fighters

The same AI capabilities that make fraud easier to commit can also make it easier to detect. And the fraud fighters we spoke with are not waiting on the sidelines—they are actively adopting these tools.

The stereotype of the risk officer as a conservative gatekeeper is fading as a new generation of risk leaders leans into technology:

There's a theme amongst the perception of a risk officer that risk officers typically are a little behind the times and maybe don't lean into tools. The reality is that risk officers now tend to be much more proactive. We see the opportunity to say, how do we use modern technology to potentially offer an opportunity to advance ourselves and learn different things?

— Michelle Prohaska, Chief Banking and Risk Officer, Nymbus

Frank McKenna, who has tracked fraud technology evolution for three decades, sees AI as a force multiplier for fraud teams.

What I've found is you can turn every fraud analyst into a rule writer. Rule writing used to be something that a Python programmer would have to do. You'd have to go to your IT team. But with generative AI, everybody can be a coder because you can use the English language. You can merely state in a prompt what you want, and if you give it enough information about the structure of the database, it can write the Python code for you.

— Frank McKenna, Chief Fraud Strategist, Point Predictive

This democratization of technical capability matters. Fraud teams have historically been constrained by their access to engineering resources. If an analyst spots a new pattern, translating that insight into a detection rule requires developer time. With AI, that translation can happen in minutes.
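The rule below is the kind of thing McKenna describes: an analyst states it in plain English and an LLM drafts the code. It is a sketch under assumed field names and a hypothetical 1.5x threshold, not a production rule.

```python
# Sketch of an analyst-authored rule of the kind an LLM can draft from a
# plain-English prompt. Field names and the 1.5x threshold are illustrative.
def stated_revenue_rule(application: dict, max_ratio: float = 1.5) -> bool:
    """Return True when the application should be flagged for review."""
    stated = application["stated_monthly_revenue"]
    observed = sum(d["amount"] for d in application["deposits"])
    return observed == 0 or stated / observed > max_ratio

app = {
    "stated_monthly_revenue": 40_000,
    "deposits": [{"amount": 9_000}, {"amount": 7_500}],  # only 16,500 observed
}
print(stated_revenue_rule(app))  # True: stated revenue is ~2.4x observed deposits
```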

McKenna describes using Claude's analysis features to process years of FinCEN data in under 90 seconds, producing charts, trend analysis, and written insights that would have taken days to compile manually.

You, in about 60 seconds, get something that probably would have taken one or two days for a person to do. Not necessarily that you take this and just publish it out there, but you can use it as a basis of understanding and then massage it into whatever analysis you're going to do.

— Frank McKenna, Chief Fraud Strategist, Point Predictive

Similarly, at Inscribe, we have been building agentic AI into our product that can perform tasks that previously required human review: explaining fraud signals in plain language, corroborating information across documents and applications, and conducting web research to verify business legitimacy.

AI Agents can reason across multiple layers. They can spot mismatched number formats, recycled templates, or income claims that don't add up—things a rules-based system or even a trained analyst might miss.

— Ronan Burke, CEO, Inscribe

The arms race is real, but fraud teams are not unarmed.

“Swiss cheese strategy” meets fraud detection

The organizations with the strongest fraud programs use multiple layers of detection, each designed to catch what the others might miss. Security professionals call this the "Swiss cheese model."

Imagine each layer of defense as a slice of Swiss cheese: full of holes. No single slice stops everything. But stack enough slices together and the holes no longer align. A fraudster who slips through document verification might get flagged by device intelligence. One who passes identity checks might trigger a behavioral anomaly. The power is in the combination.
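The intuition can be made concrete with a quick calculation. If three layers each miss 30% of attempts, only about 2.7% of attempts slip through all three. The per-layer rates are illustrative, and real layers are rarely fully independent, so treat this as an upper-bound intuition rather than a measured result.

```python
# Illustrative "Swiss cheese" arithmetic: the chance a fraudulent submission
# slips past every layer, assuming the layers act independently.
from math import prod

miss_rates = [0.30, 0.30, 0.30]   # hypothetical per-layer miss rates
combined_miss = prod(miss_rates)  # 0.027, i.e. ~2.7% get through all three
print(f"Combined miss rate: {combined_miss:.1%}")
```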

We use several systems and processes to vet our incoming members. When we get alerts from initial checks, that leads us down the path of requesting more documentation such as utility bills or lease agreements to verify proof of residence. We can stop the fraud from even getting in the front door.

— Matt Overin, Manager of Risk Management, Logix FCU

At Nymbus, layered detection includes behavioral signals captured before a transaction even occurs. Michelle Prohaska describes how their fraud interdiction partner adds a critical layer of visibility.

We use Datavisor to collect both biometric and device data about how people log in. It can detect bot activities like are you moving your mouse in the way that a human would, or the way that a bot would? Are you copying and pasting information? What's the speed at which you typed or responded to things? You can pick up a lot of information just from the start of an experience.

— Michelle Prohaska, Chief Banking and Risk Officer, Nymbus

BCU's team has used layered detection to uncover fraud rings that would have been invisible to single-point analysis. In one case, they used Inscribe's X-Ray feature to connect multiple members tied to the same address, revealing that a document was originally owned by a blacklisted member with roughly $100,000 in charged-off loans.

In one case, a $75,000 auto loan was prevented after a bank statement was flagged for a fingerprint mismatch. That one signal changed the whole outcome of the case.

— Tyler Davenport, Investigator, Account Protection Team, BCU

Jorge Cortes, Vice President of Enterprise Risk Management at Kinecta, also points to X-ray as a key detection layer for his team.

The team absolutely loves the X-ray feature, especially with how convincing documents look now with AI. It helps us spot things we might otherwise miss, and it's certainly helped the team make better decisions.

— Jorge Cortes, Vice President, Enterprise Risk Management, Kinecta

Angela Diaz emphasizes that layering must be thoughtful, not just additive.

I think if we skip the part where we look at what we've done historically, what's in place internally and foundationally, and what tweaks may need to happen there in order to strengthen those controls, rules, alerts, models, we will not be actually solving this in the most effective way possible because we might be layering on tools that are great, but that don't align with the way that these scams are actually showing up for us. A third party vendor cannot make up for a lack of internal controls within any institution.

— Angela Diaz, Senior Principal of Operational Risk Management, Discover

The goal is to build a system where each layer reinforces the others and gaps in one are covered by another.

Collaboration and community act as force multipliers

Fraudsters are organized. Online communities dedicated to sharing fraud techniques have operated openly on social media platforms.

On Reddit, the subreddit r/IllegalLifeProTips amassed nearly one million members exchanging tips on everything from shoplifting to document fraud. While the community framed posts as "hypothetical" and "for entertainment purposes only," the advice was specific and actionable.

One user with 153 upvotes advised: "Take your real bank statements and use Adobe Acrobat to change the numbers." Another suggested: "Even better, log onto online banking and inspect element then take a screenshot!"

A third provided step-by-step instructions: "Download your most current [statement] and load it into Adobe Acrobat. Redact literally everything except your name, the dates, and your direct deposit that month, make sure you edit the amount to match your pay stub. Remember, fonts will [get] you. Use Matcherator or Font Squirrel to identify any fonts in the document and download and install them before you get started.”

Reddit has since banned r/IllegalLifeProTips for violating its rules against transactions involving prohibited goods or services. But the pattern is clear: when one community is shut down, others emerge. The knowledge has already spread.

The most effective fraud teams have learned to do the same.

As the fraud fighters, I'm seeing a ton of passion that's starting to come out. Fraud fighters that are not afraid to reach out anymore. There's so much crossover and networking in the community. You see people from one group in another group, and we're all fighting together because we know that's the only way we have any chance of winning this battle.

— Hailey Windham, Founder, CU Fight Fraud

Matt Overin describes a network of credit unions that share intelligence on a regular basis.

We are lucky to have a group of more than 50 local CUs that meet virtually on a monthly basis. We all go around the table and give examples of the fraud we are seeing real time and our mitigation efforts. That spawns offline conversations about vital vendors and software that makes our lives easier.

— Matt Overin, Manager of Risk Management, Logix FCU

This kind of sharing creates network effects. When one institution spots a new fraud pattern, others can defend against it before they are hit. A great example is the annual Fraud Round Table hosted by the account protection team at BCU.

Hundreds of credit unions come together at the event each year to learn from one another. In September 2025, the BCU team celebrated its 11th year of hosting (pictured below), welcoming fellow credit unions both virtually and in person.

Rapid Finance has built a culture of openness around fraud detection.

We openly talk about it. All of our sales reps, all of our underwriters, they've had sessions in terms of what these results mean. And we even have dedicated Slack channels that they can reach out to the experts on it. We're openly talking about it. We're not having to hide behind some sort of wall. We're willing to talk about fraud and the hits and the misses out in the open as an organization.

— Patrick Lord, Senior Project Manager, Rapid Finance

Beyond individual institutions, the fraud fighting community has built its own infrastructure for sharing knowledge. Frank McKenna's newsletter Frank on Fraud has become a go-to resource for staying current on emerging schemes and industry trends. Events like Fraud Fight Club from the team at About Fraud bring practitioners together for candid, off-the-record conversations about what's working and what isn't.

At the 2025 Fraud Fight Club in Charlotte, panels like "I Got 99 Problems & My Customer's One: First Party Fraud Unpacked" featured leaders from Point Predictive, First Citizens Bank, Navy Federal Credit Union, and Quavo sharing hard-won lessons on a main stage designed to look like a boxing ring (a fitting metaphor for the daily fight against fraud).

Inscribe's own podcast, Good Question, brings together fraud and risk leaders to discuss the big questions shaping the future of fraud, AI, and trust. Recent episodes have tackled topics like whether AI scams are the biggest fraud threat yet and how women leaders are transforming the risk landscape — featuring practitioners sharing unfiltered perspectives on what's working and what's next. (You can check out Good Question on Spotify and YouTube.)

These practitioner-to-practitioner exchanges give fraud fighters a space to speak openly about challenges, compare notes on detection strategies, and build the relationships that make real-time intelligence sharing possible.

Jen Lamont sees education and collaboration as inseparable.

I think it's so important that we stay up to date on what's going on, and that's not always possible in our industry because it's so fast paced. So having different team members following different groups and participating in different networking events and different conversations, and then coming back and sharing it with the team, it has made all the difference in the world to our investigation.

— Jen Lamont, BSA Compliance Officer & Fraud Manager, America's Credit Union

The fraud community has become a genuine community. Conferences, Slack groups, LinkedIn networks, and industry associations provide channels for sharing intelligence and best practices. Organizations that participate gain an advantage over those that operate in isolation.

The fraud community's willingness to share is itself a competitive advantage against fraudsters who also collaborate.

— Ronan Burke, CEO, Inscribe

Key Takeaways:

Winning Strategies

Combine AI and human judgment

AI handles volume, pattern detection, and routine verification. Humans handle judgment calls, customer relationships, and novel fraud schemes. Neither alone is sufficient.

Build layered defenses

Document verification is one layer. Identity verification, device intelligence, behavioral analytics, and network analysis each add signals. The power comes from combining them thoughtfully.

Integrate, do not accumulate

Adding tools without integration creates noise. The most effective programs connect their layers so that signals reinforce each other and gaps are covered.

Share intelligence across institutions

Fraudsters share tactics. Fraud fighters should too. Networks of institutions that share real-time intelligence can defend against emerging threats faster than those that operate alone.

Invest in education and community

The fraud landscape changes faster than any individual can track. Teams that distribute learning across members and participate in industry networks stay ahead of those that do not.

Create a culture of fraud prevention

Internal transparency about fraud patterns, hits, and misses creates a culture where problems are identified and addressed quickly. External transparency with partners builds trust and improves outcomes for everyone.

Conclusion

In 2025, AI-generated and template-based document fraud increased 208% year over year in the first half of the year alone, and detected AI-generated fraud rose nearly fivefold between April and December. We saw utility bills emerge as the most frequently manipulated document type, with fraud rates nearly three times higher than bank statements. We saw fraud teams stretched thin by manual review processes that cannot scale to meet current volume.

We also saw organizations adapt. The fraud fighters we interviewed are not waiting for the problem to get worse. They are deploying AI to automate routine verification, building layered defenses that combine multiple detection methods, and sharing intelligence across institutional boundaries.

The path forward is not mysterious. It requires investment in technology, yes, but also investment in people and processes. It requires a willingness to measure attack vectors like first-party fraud, even when the numbers are uncomfortable. It requires speed, because good customers will not wait and slow decisions create adverse selection. And it requires collaboration, because no institution can see the full picture alone.

Fraud has moved from cut-and-paste to cut-and-code. The good news is that the same AI driving this threat can also be the solution. What matters now is how quickly institutions adopt systems that adapt as fast as fraud evolves.

— Ronan Burke, CEO, Inscribe

Methodology

This report draws on three primary sources:

Inscribe Network Data (January through November 2025): Detection data from millions of documents processed across hundreds of banks, credit unions, fintechs, and lenders using the Inscribe platform.

Inscribe Fraud Fighter Survey (November and December 2025): Survey responses from 90 fraud and risk leaders across the financial services industry.

Practitioner Interviews (2025): Conversations with fraud and risk leaders including:

  • Michelle Prohaska, Chief Banking and Risk Officer, Nymbus
  • Anurag Puranik, Chief Risk Officer, Coast
  • Patrick Lord, Senior Project Manager, Rapid Finance
  • Timothy O'Rear, Senior Underwriter, Rapid Finance
  • Angela Diaz, Senior Principal of Operational Risk Management, Discover
  • Michael Coomer, Director of Fraud Management, BHG Financial
  • Matt Overin, Manager of Risk Management, Logix Federal Credit Union
  • Frank McKenna, Chief Fraud Strategist, Point Predictive
  • Hailey Windham, Founder, CU Fight Fraud
  • Jen Lamont, BSA Compliance Officer & Fraud Manager, America's Credit Union
  • Nickie Christianson, Senior Manager, Account Protection Team, BCU
  • Samantha Burback, Investigator, Account Protection Team, BCU
  • Tyler Davenport, Investigator, Account Protection Team, BCU
  • Jorge Cortes, Vice President, Enterprise Risk Management, Kinecta

Additional data and analysis drawn from Inscribe blog posts and research published throughout 2025.

For more information about Inscribe's document fraud detection capabilities, visit inscribe.ai or contact our team to schedule a conversation.

What will our AI Agents find in your documents?

Start your free trial to catch more fraud, faster.

Join our email list for the latest risk trends, product updates, and more.