TikTok's Head of Payment Risk and Discover's former fraud oversight leader explore how digital platforms and banks now face the same fraud threats — and why traditional banks are rapidly becoming fintechs.
What does fraud look like when your platform processes tips to livestream creators? What about when your institution manages credit cards, personal loans, and home mortgages? And what happens when both environments face the same adversary armed with AI?
In this episode of Good Question, we brought together two risk leaders from very different corners of the financial ecosystem: Hua Li, who protects payments across TikTok's global marketplace, and Angela Diaz, who managed external fraud risk at Discover at the time of this recording (she now leads external fraud oversight at TD). Despite operating in distinct environments, their challenges are strikingly similar, and their insights reveal why traditional banks are being forced to operate like digital-first fintech platforms — and vice versa.
The takeaway: the line between digital platforms and banks is dissolving. Whether you're protecting creator payouts or mortgage applications, the fundamentals of identity verification, document integrity, and behavioral analysis are converging… and so are the fraud risks.
TikTok operates as a marketplace where money flows in two directions. Audiences purchase virtual gifts to tip livestream hosts, and TikTok pays out earnings to creators through regional payout programs. That two-sided flow creates fraud exposure on both ends.
Hua described several attack vectors his team monitors daily, spanning both the pay-in and payout sides of the marketplace.
His team's daily rhythm starts with dashboards tracking loss rates, approval rates, and risk-related declines across payment methods — essentially the same operational discipline you'd find at a traditional bank, applied to a 100% digital, global platform.
Angela outlined the fraud pressures facing large financial institutions that span credit cards, online banking, and lending, and the themes dominating banking leaders' current focus.
One of the richest parts of the conversation explored what's lost when fraud prevention moves from in-person to digital channels.
Angela painted a vivid picture: in a branch environment, a teller can hold a physical document, observe social cues, ask follow-up questions, and compare the person standing in front of them to the documentation they've provided. In a digital environment, all of that disappears. Institutions are left relying on cross-checks of data points they cannot physically see, from humans they cannot physically interact with.
The challenge is compounded by speed. Banks are designing their digital experiences to let legitimate customers open accounts and move money quickly because that's what competitive pressure demands. But every convenience introduced without corresponding risk controls creates an opportunity for exploitation.
Hua offered a complementary perspective. TikTok was 100% digital from day one and global from the start, meaning it had to solve for the hardest version of these problems immediately. His team's approach distills identity verification into two core questions: is this a real person (not a fake account), and is this actually the account owner (not an account takeover)?
To answer those questions at scale, TikTok relies heavily on behavioral data—how users register, log in, and interact with the platform over time. For high-value creator accounts, they layer on document verification and bank account name matching, but always balance friction against conversion. In some regions, requiring an exact (100%) name match introduced friction for 20–30% of users, forcing the team to explore fuzzy matching and third-party verification to find the right tradeoff.
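To make the tradeoff concrete, here is a minimal sketch of the kind of fuzzy name matching Hua describes, using only Python's standard library. The normalization steps, similarity metric, and thresholds are illustrative assumptions, not TikTok's actual implementation.

```python
from difflib import SequenceMatcher
import unicodedata

def normalize(name: str) -> str:
    """Lowercase, strip accents, and collapse whitespace before comparing."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    return " ".join(name.lower().split())

def name_match_score(registered: str, bank_account: str) -> float:
    """Similarity in [0, 1] between the platform name and the payout account name."""
    return SequenceMatcher(None, normalize(registered), normalize(bank_account)).ratio()

def payout_decision(registered: str, bank_account: str,
                    approve_at: float = 0.90, review_at: float = 0.70) -> str:
    """Approve close matches, route borderline ones to manual review,
    and decline clear mismatches. Thresholds are illustrative, not TikTok's."""
    score = name_match_score(registered, bank_account)
    if score >= approve_at:
        return "approve"
    if score >= review_at:
        return "manual_review"
    return "decline"
```

The design point is that a strict equality check would reject "José García" against a bank record of "Jose Garcia", while a scored comparison with a review band lets the team tune where friction lands instead of treating every near-miss as a failure.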
Both guests confirmed that AI is actively helping fraudsters produce more convincing documents. Angela noted that fake passports and driver's licenses that would have been easy to identify five to ten years ago now mimic realistic characteristics like light glare and natural aging. When a reviewer knows a document is AI-generated, the differences are visible. But for an entry-level investigator working through high volumes without that prior knowledge, detection is extremely difficult.
The training challenge is equally pressing: keeping staff updated on what to look for, and updating procedures with current examples, at the same pace fraudsters are adopting new tools.
Hua shared similar observations from his experience. At TikTok, fraudster-generated names have evolved from easily detectable gibberish to sophisticated variations that defeat traditional detection models. In his previous role at a rideshare company, he saw fake driver's licenses ranging from crude hand-drawn forgeries to convincing cross-verified documents. He pointed to government database verification (APIs that connect directly to police or identity authorities in countries like China and Brazil) as one of the most effective countermeasures, but acknowledged the approach isn't universally available.
Ronan added context from Inscribe's data: one in 16 documents submitted via digital channels is tampered with, and there's been a 200% increase in the use of AI tools to alter documents. Beyond documents, the threat extends to phone call verification (where LLMs produce real-time scripts to defeat security questions) and even spending pattern fabrication to make fraudulent accounts appear legitimate.
When asked whether AI is the answer to AI-powered fraud, Angela was measured: it's part of the solution, but not the full answer. She drew a parallel between fraud teams and fraudsters—both are optimizing for scalable, repeatable efficiency. That makes it a closely matched battle.
Her emphasis was on responsible integration. Before deploying AI, institutions need to define what they will and won't use it for, build governance and controls around it, and ensure they can measure performance and explain outcomes. AI can excel at real-time data analysis and pattern detection that the human eye would miss, especially for document review at scale. But deploying innovation without understanding its risk profile creates new vulnerabilities.
Hua was more forward-leaning on adoption. TikTok is restructuring its risk teams to invest significantly more in AI capabilities through 2026. He framed the decision in terms of adversarial dynamics: your counterpart is evolving, so you must evolve at least as fast. AI offers immediate value in reducing the cost of operations (BPO teams are expensive and limited), accelerating review times, and eventually providing advisory or even autonomous decision-making.
He also raised an underappreciated strategic dimension: increasing the cost for fraudsters to commit fraud. By making attacks more expensive and less profitable, institutions can discourage attempts altogether. That effort requires not just technology but regulatory cooperation—something he sees as an ongoing industry need.
Both guests converged on a shared recommendation for where AI can deliver the most impact in the near term.
Angela pointed to behavioral biometrics: using AI to detect anomalies in legitimate, aged customer accounts at transaction speed. Her reasoning: social engineering and account takeover attacks have already penetrated the institution's perimeter by the time they reach the transaction layer. The question is whether AI can identify behavioral deviations fast enough to intervene before funds move. She acknowledged this is easier said than done at the scale of a large financial institution, but sees it as the critical gap that recent scam waves have exposed.
Hua described the same concept through the lens of sequential modeling. Traditional approaches count discrete events (like how many logins in a period) which fraudsters can now mimic. But linking behaviors across a timeline (registration, app usage, browsing patterns, transaction initiation) creates a much harder pattern to replicate. The computational cost is high, but it's precisely where AI and LLMs can deliver disproportionate value. He committed to investing heavily in this direction through 2026.
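The contrast Hua draws can be shown with a toy sketch: two accounts whose event counts are identical but whose event orderings differ. The event names, data, and overlap metric below are hypothetical illustrations, not TikTok's actual features or models.

```python
from collections import Counter

def count_features(events):
    """Discrete counts: how many of each event occurred, order ignored."""
    return Counter(events)

def sequence_features(events):
    """Order-aware features: bigrams recording which event follows which."""
    return Counter(zip(events, events[1:]))

def overlap(a, b):
    """Fraction of shared feature mass between two profiles (1.0 = identical)."""
    shared = sum((a & b).values())
    total = max(sum(a.values()), sum(b.values()))
    return shared / total if total else 0.0

# Hypothetical event logs: the same multiset of events, in different orders.
legit = ["register", "browse", "login", "browse", "tip"]
fraud = ["register", "login", "tip", "browse", "browse"]
```

Count-based features cannot tell these two accounts apart (their overlap is 1.0), while the bigram view separates them completely (overlap 0.0). Production systems would use far richer sequence models over many more events, but the asymmetry is the point: mimicking counts is cheap, mimicking an entire behavioral timeline is not.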
Perhaps the most important insight from this conversation is how much the fraud challenges at a global content platform and a major financial institution have already converged. TikTok manages payments, verifies identities, and fights account takeover: functions indistinguishable from those of a bank. Meanwhile, Discover is racing to build digital-native experiences that look increasingly like those of a tech platform. Both face account takeover. Both grapple with document fraud. Both must balance friction against growth. Both are racing to deploy AI responsibly while their adversaries deploy it without constraint.
The organizations that will lead in 2026 are those that recognize this convergence and build for it.
The fintech-bank divide is collapsing. But the bigger shift may be this: traditional banks are increasingly operating like fintechs, and inheriting the same digital risk profile in the process.
Ronan Burke is the co-founder and CEO of Inscribe. He founded Inscribe with his twin after they experienced the challenges of manual review operations and over-burdened risk teams at national banks and fast-growing fintechs. So they set out to alleviate those challenges by deploying safe, scalable, and reliable AI.
Angela Diaz, CRMP, is a Senior Principal of Operational Risk Management at Discover. Angela brings deep experience from the banking side, where risk programs meet scale and regulatory oversight.
Hua Li is the Head of Payment Risk at TikTok, where he leads efforts to protect the financial health and customer experience of TikTok's global payment ecosystem. With experience spanning digital platforms and rideshare companies, he brings deep expertise in fraud prevention across high-velocity, digital-first environments.
Start your free trial to catch more fraud, faster.