
How Chief Risk Officers are guiding the responsible adoption of AI


The financial services industry is at a pivotal point in the adoption of artificial intelligence.

From large language models (LLMs) to behavioral biometrics, AI is reshaping how banks and fintechs approach fraud detection, compliance, and customer operations.

What’s often overlooked in these conversations, however, is the role of the risk leader in guiding this transition.

In a recent Good Question podcast episode, I spoke with two highly experienced Chief Risk Officers, Laurel Sykes (EVP and CRO at American Riviera Bank) and Michelle Proshaka (Chief Thinking and Risk Officer at Nymbus), about how they’re leveraging AI not just to enhance controls, but to enable innovation within a well-structured risk and compliance framework.

Their insights confirmed a central thesis: AI in financial risk management is not a disruption to be feared; it’s a capability to be governed and optimized.

AI is already operational in Risk functions

Both Laurel and Michelle are actively implementing AI-based solutions across core components of their risk architecture. At American Riviera Bank, Laurel’s team uses:

  • LLM-powered transaction monitoring tools to detect elder financial abuse by evaluating behavior against known risk patterns (a simplified sketch follows this list).

  • Natural language models that assess email content for indicators of fraud, providing alerts to customers or designated contacts.

  • AI-assisted writing tools (e.g., Microsoft Copilot) to generate internal memos, board reports, and customer education materials more efficiently.
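Laurel didn’t walk through implementation details on the podcast, but the first of those bullets maps to a recognizable pattern: compare each transaction against a customer’s behavioral baseline and match deviations against known elder-abuse risk patterns. Here is a minimal sketch of that rule layer; every field name, threshold, and pattern is an illustrative assumption, not American Riviera Bank’s actual logic.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative thresholds for known elder-abuse risk patterns; a real
# system would calibrate these from labeled data.
AMOUNT_DEVIATION_FACTOR = 3.0       # flag amounts 3x the customer's median
RAPID_WINDOW = timedelta(hours=24)  # window for "rapid succession"
RAPID_COUNT = 3                     # transfers within that window

@dataclass
class CustomerBaseline:
    median_transfer: float
    known_payees: set
    recent_transfers: list = field(default_factory=list)

def score_transaction(baseline, payee, amount, ts):
    """Return the list of risk patterns this transaction matches."""
    flags = []
    if payee not in baseline.known_payees:
        flags.append("new_payee")
    if amount > AMOUNT_DEVIATION_FACTOR * baseline.median_transfer:
        flags.append("amount_deviation")
    recent = [t for t in baseline.recent_transfers if ts - t <= RAPID_WINDOW]
    if len(recent) + 1 >= RAPID_COUNT:
        flags.append("rapid_succession")
    return flags

baseline = CustomerBaseline(median_transfer=200.0, known_payees={"utility-co"})
flags = score_transaction(baseline, "unknown-llc", 2500.0, datetime.now())
if flags:
    # In the tools Laurel describes, a flagged transaction would go to an
    # LLM for narrative assessment and then to a human analyst; this sketch
    # stops at the rule layer.
    print(f"Escalate for review: {flags}")
```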

At Nymbus, Michelle oversees a platform that integrates:

  • Behavioral biometrics and device intelligence via vendors like DataVisor, analyzing mouse movement, typing cadence, and device fingerprinting to detect bots and scripted fraud.

  • Real-time fraud interdiction systems that trigger automated step-up authentication or transaction holds based on dynamic risk scoring (sketched in code below).

  • Regulatory change monitoring platforms (e.g., MitraTech) to maintain visibility into shifting compliance requirements and integrate them into product design.

These are not proof-of-concept pilots. They are production-level systems, integrated into day-to-day operations and used to drive real business outcomes.
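To make the interdiction pattern concrete: the second item on Michelle’s list describes a score-and-route loop, where behavioral and device signals roll up into a risk score that decides whether a session proceeds, gets step-up authentication, or is held. The signals, weights, and thresholds below are hypothetical stand-ins, not DataVisor’s or Nymbus’s actual scoring model.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    STEP_UP = "step_up_authentication"
    HOLD = "transaction_hold"

# Hypothetical signal weights; platforms like the one Michelle describes
# derive scores from models trained on behavioral and device telemetry.
WEIGHTS = {
    "new_device": 0.30,
    "bot_like_typing_cadence": 0.45,
    "scripted_mouse_movement": 0.40,
    "high_risk_geolocation": 0.25,
}

def risk_score(signals):
    """Combine boolean fraud signals into a score capped at 1.0."""
    return min(1.0, sum(w for name, w in WEIGHTS.items() if signals.get(name)))

def interdict(signals):
    score = risk_score(signals)
    if score >= 0.70:   # hypothetical hold threshold
        return Action.HOLD
    if score >= 0.35:   # hypothetical step-up threshold
        return Action.STEP_UP
    return Action.ALLOW

print(interdict({"new_device": True, "bot_like_typing_cadence": True}))
# Action.HOLD: 0.30 + 0.45 = 0.75 crosses the hold threshold
```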

Risk management isn’t opposed to innovation (it enables it)

The perception that risk and compliance teams serve only as control functions is outdated. Increasingly, these teams are embedding into product development, helping organizations build systems that are compliant by design.

Michelle emphasized this shift directly: “If risk is involved from the beginning, you reduce the likelihood of rework or regulatory exposure later on. Our team exists to improve outcomes, not delay them.”

Both leaders described how early-stage collaboration between product, engineering, and risk reduces the complexity of later-stage governance. Instead of acting as gatekeepers, modern risk officers are operating as strategic advisors, defining acceptable parameters for experimentation and deploying control mechanisms that scale with new products.

AI use cases must be aligned with policy and oversight

Laurel highlighted an important concern: many financial institutions do not have adequate visibility into how AI is being deployed internally. This creates significant risks not only for model governance, but for customer safety and regulatory compliance.

Both Laurel and Michelle advocated for centralized AI governance frameworks that include:

  • Inventory tracking for all AI and ML models used across business functions

  • Use case classification, documenting whether models are used for decision-making, automation, fraud detection, or content generation (see the sketch after this list)

  • Risk assessment procedures tailored to the type of AI system in use (e.g., supervised vs. unsupervised models, deterministic vs. generative outputs)

  • Human-in-the-loop controls for edge-case review, especially in high-sensitivity areas such as transaction blocking or customer offboarding
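What might a single record in that inventory look like? A minimal sketch follows, capturing the classification and human-in-the-loop fields the framework calls for. The field names and the escalation rule are assumptions for illustration, not a description of either institution’s system.

```python
from dataclasses import dataclass
from enum import Enum

class UseCase(Enum):
    DECISION_MAKING = "decision_making"
    AUTOMATION = "automation"
    FRAUD_DETECTION = "fraud_detection"
    CONTENT_GENERATION = "content_generation"

class OutputType(Enum):
    DETERMINISTIC = "deterministic"
    GENERATIVE = "generative"

@dataclass
class ModelInventoryEntry:
    name: str
    owner: str               # accountable business function
    use_case: UseCase
    output_type: OutputType
    human_in_the_loop: bool  # analyst review required before action?

    def requires_enhanced_review(self):
        """Illustrative escalation rule: high-impact or generative models
        with no human in the loop get the strictest assessment tier."""
        high_impact = self.use_case in (UseCase.DECISION_MAKING,
                                        UseCase.FRAUD_DETECTION)
        return ((high_impact or self.output_type is OutputType.GENERATIVE)
                and not self.human_in_the_loop)

entry = ModelInventoryEntry(
    name="txn-hold-model", owner="fraud-ops",
    use_case=UseCase.FRAUD_DETECTION,
    output_type=OutputType.DETERMINISTIC,
    human_in_the_loop=False,
)
assert entry.requires_enhanced_review()  # no analyst in the loop -> escalate
```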

As Michelle put it: “Our goal is never 100% automation. There’s always value in having analysts in the loop to assess context, investigate anomalies, and bring human judgment to the process.”

AI can strengthen both control and communication

An interesting pattern emerged in the conversation: AI is being used not only for detection and decisioning, but also for communication and education.

Both risk teams are leveraging generative AI tools to:

  • Draft suspicious activity reports (SARs)

  • Generate internal documentation for audits and board reporting

  • Produce customer-facing fraud alerts and educational content in near real time

While outputs are reviewed for accuracy and tone, these tools are significantly increasing operational efficiency—reducing cognitive load on teams while accelerating time-to-response in both internal and external contexts.
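Neither team described their tooling at this level of detail, but the review step can be made explicit in code. In the sketch below, the generative call is a stub, and the property that matters is structural: no draft is marked approved without an analyst’s sign-off.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    kind: str        # e.g., "SAR", "board_report", "customer_alert"
    text: str
    approved: bool = False

def generate_draft(kind, case_summary):
    # Stand-in for a call to a generative model; the provider, prompt,
    # and guardrails are deliberately left unspecified here.
    return Draft(kind=kind, text=f"[DRAFT {kind}] {case_summary}")

def review(draft, analyst_ok, notes=""):
    """No generated text leaves the pipeline without analyst sign-off."""
    draft.approved = analyst_ok
    if notes:
        draft.text += f"\n[Analyst notes: {notes}]"
    return draft

draft = generate_draft("SAR", "Structuring pattern across four accounts.")
final = review(draft, analyst_ok=True, notes="Dates verified against core.")
assert final.approved  # only approved drafts proceed to filing or sending
```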

Strategic Risk leadership will define AI adoption trajectories

Ultimately, the responsible integration of AI into financial systems will depend on leaders who understand both its potential and its limitations.

Michelle described how her team created a vision, mission, and values for risk—not as a branding exercise, but to anchor their role in enabling progress. Laurel emphasized the importance of prioritization and empathy—of being technically rigorous, but also attuned to how risk decisions affect real people.

Their work illustrates that risk and compliance leaders are no longer reactive policy enforcers. They are proactive stewards of organizational integrity, helping their companies adopt emerging technologies while maintaining public trust and regulatory confidence.

A framework for responsible AI in financial risk

AI offers enormous promise in financial services—but only if paired with the appropriate controls, oversight, and human expertise. Leaders like Laurel and Michelle show what that balance looks like in practice:

  • Thoughtful use of LLMs and biometrics to enhance fraud detection

  • Structured governance around model deployment

  • Ongoing education and clear communication with stakeholders

  • A collaborative posture that treats risk as an enabler of innovation, not an obstacle to it

As the industry evolves, the question is no longer whether we should use AI in risk operations. It’s how we do so deliberately, transparently, and responsibly. And the answer lies with the people who have always done that best: risk professionals.

If you’re interested in learning more, we’d love to have a conversation. Simply reach out to schedule a meeting with our team.

About the author

Brianna Valleskey is the Head of Marketing at Inscribe AI. A former journalist and longtime B2B marketing leader, Brianna is the creator and host of Good Question, where she brings together experts at the intersection of fraud, fintech, and AI. She’s passionate about making technical topics accessible and inspiring the next generation of risk leaders, and was named 2022 Experimental Marketer of the Year and one of the 2023 Top 50 Women in Content. Prior to Inscribe, she served in marketing and leadership roles at Sendoso, Benzinga, and LevelEleven.