Industry Report

Trust in AI

What Finance Leaders Need to Embrace Artificial Intelligence

A pharmaceutical company CFO receives a voicemail from his CEO about an urgent verbal contract needing immediate funding. The voice is unmistakable — same tone, same speech patterns. A follow-up email reinforces the urgency. Within hours, $200,000 is wired to fraudsters who used AI voice cloning technology to impersonate the CEO.

This 2021 incident illustrates a fundamental challenge facing finance leaders today:

AI is neither inherently trustworthy nor untrustworthy. It’s a tool that produces outcomes based on how it’s deployed and who deploys it.

Billtrust surveyed 500 finance professionals and C-suite decision makers to understand what shapes their trust in AI and what they need to feel confident about adopting AI-powered systems.

To explore these trust dynamics, we examined AI through the lens of fraud and financial crime, a domain where the consequences of irresponsible AI deployment are stark and the contrast between trustworthy and untrustworthy AI systems is dramatic. How finance leaders respond to criminal misuse of AI reveals broader insights about what they need from any AI system: transparency, accountability, human oversight, and ethical constraints.

Our research reveals that finance leaders aren’t rejecting AI—they’re approaching it with caution. With 82% expressing concern about AI’s potential for misuse, the question isn’t whether finance will adopt AI, but what responsible AI implementation looks like and how organizations build trust in these systems.

The Trust Challenge: When AI Is Deployed Irresponsibly

Finance leaders’ wariness of AI isn’t irrational technophobia. It’s based on witnessing what happens when AI is deployed without proper oversight, transparency, or ethical guardrails. Criminal misuse of AI provides the clearest example of what irresponsible AI looks like, and finance teams are encountering these consequences firsthand:

  • AI-generated phishing emails sophisticated enough to fool experienced staff
  • AI-created emails that perfectly mimic executive or vendor communication styles
  • AI-generated fake invoices with convincing branding and formatting
  • AI voice cloning used to impersonate known contacts

These experiences shape how finance professionals think about AI broadly. When they see AI deployed without accountability, oversight, or ethical constraints (as criminals do), it raises fundamental questions: How do we know when AI is being used responsibly? What distinguishes trustworthy AI from untrustworthy AI? What safeguards need to exist?

Respondents ranked AI voice cloning as their second-highest concern, not because the technology itself is problematic, but because it demonstrates how AI without proper constraints can undermine fundamental trust in communication. The pharmaceutical company case took 10 days to detect. At engineering giant Arup, finance workers attended what they thought was a video call with executives, only to discover later that deepfakes had fooled them into transferring $25 million.

These incidents don’t argue against AI. They argue for responsible AI deployment with appropriate safeguards, transparency, and human oversight.

What Finance Leaders Value: The Right Balance Between AI and Human Judgment

Finance leaders understand that the future of their operations lies in combining AI’s analytical power with human expertise.

Our data shows that 76% of financial decision-makers believe they would catch a fraudulent invoice before processing payment, reflecting confidence in their teams’ judgment and experience. And in fact, many do: 57% of teams actively flag six or more potentially suspicious invoices monthly. Among companies processing over 5,000 invoices monthly, 45% flag six or more suspicious requests and 20% flag more than 10.

Finance professionals are successfully catching threats using the judgment and pattern recognition they’ve developed through years of experience. But this reveals a challenge: teams are dedicating significant time and attention to invoice review. For high-volume organizations, this manual scrutiny becomes unsustainable, pulling experienced professionals away from strategic work.

This is where responsible AI creates value. Rather than replacing human judgment, AI can handle the computational work by analyzing thousands of transactions, identifying patterns across historical data, and flagging anomalies for human review. This frees finance professionals to focus their expertise where it’s most valuable: building vendor relationships, optimizing cash flow, analyzing business performance, and yes, evaluating the genuinely suspicious cases that require contextual understanding and judgment.
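To make that division of labor concrete, here is a minimal sketch of the kind of anomaly flagging described above. It is illustrative only: the Invoice shape, the five-invoice minimum history, and the three-sigma threshold are assumptions for the example, not a description of any particular product's implementation.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Invoice:
    vendor: str
    amount: float

def flag_anomalies(history: dict[str, list[float]], incoming: list[Invoice],
                   z_threshold: float = 3.0) -> list[Invoice]:
    """Flag invoices whose amounts deviate sharply from the vendor's history."""
    flagged = []
    for inv in incoming:
        past = history.get(inv.vendor, [])
        if len(past) < 5:                # too little history: default to human review
            flagged.append(inv)
            continue
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            if inv.amount != mu:
                flagged.append(inv)
        elif abs(inv.amount - mu) / sigma > z_threshold:
            flagged.append(inv)          # statistical outlier: escalate to a person
    return flagged

history = {"Acme Corp": [1_200.0, 1_150.0, 1_300.0, 1_250.0, 1_180.0]}
print(flag_anomalies(history, [Invoice("Acme Corp", 9_800.0)]))  # flagged for review
```

Note that nothing here is auto-rejected: every flag lands in front of a person, which is exactly the partnership the paragraph above describes.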

Moreover, when asked about their greatest concern regarding AI and fraud, 46% pointed to the possibility that AI-generated fraud might become so realistic that human review alone would no longer be sufficient. The insight here isn’t that humans should do more work. It’s that humans need better tools: AI that can process volume and flag anomalies at scale gives human experts the information advantage they need to make confident decisions about the cases that matter most.

The partnership works because AI and humans excel at different things. AI handles speed, volume, and pattern recognition across massive datasets. Humans provide contextual judgment, ethical reasoning, and the ability to understand nuance. Together, they create a more capable system than either could achieve alone, and one where finance teams can do more with existing resources rather than being overwhelmed by increasing transaction volumes and sophisticated threats.

Current fraud detection practices:

  • Bank account verification services (systematic but human-reviewed)
  • Multi-step approval workflows (multiple human checkpoints), used by 63% of finance leaders
  • Automated vendor database cross-referencing (technology supporting human decisions)
  • Manual phone calls for verification (direct human contact)
  • Staff recognition and experience (human expertise)

The Visibility and Transparency Imperative

Trust requires visibility. One of the most striking findings from our research is that 27% of organizations don’t track suspicious activity at all or aren’t sure of their numbers. This represents a fundamental trust problem, not with AI specifically, but with any system that operates without transparency or accountability.

Organizations that flag suspicious invoices demonstrate a critical principle of trustworthy systems: visibility into what’s happening and why. When teams can see that they’re flagging six, ten, or more potential threats monthly, they understand their risk landscape. They can assess whether their controls are working, identify patterns, and make informed decisions about where to invest in additional safeguards.

The 27% without this visibility face a different reality. Without systematic tracking and transparency, they have no way to assess risk, measure effectiveness, or build confidence in their processes. This blind spot represents the opposite of what makes systems trustworthy, whether those systems involve AI or not.

What this reveals about trust in AI systems

Finance leaders need AI systems that provide:

Transparency

Clear visibility into what the system is doing and why

Auditability

Comprehensive records that can be reviewed and verified

Explainability

Understanding of how decisions are made, not just black-box outputs

Accountability

Clear ownership and responsibility for outcomes

For organizations processing thousands of invoices, AI without these characteristics would simply move the blind spot from manual processes to automated ones. Trust isn’t built by replacing human judgment with opaque algorithms. It’s built by creating systems where humans can see, understand, and verify what’s happening.

The Speed and Scale Dilemma

Finance leaders recognize how rapidly their workplace is changing. Our survey reveals that 70% expect AI-powered activities in finance to increase over the next 12 months, with 27% anticipating a dramatic surge. In fact, 83% plan to implement AI-enabled solutions over the next two years.

This growth reflects the real benefits that finance teams are already experiencing. Recent Billtrust research shows that 90% of financial decision-makers now rely on AI for financial decisions, with 83% reporting that AI has positively influenced their approach to managing financial risk in 2025. The investment follows the value: nearly one in five executives are dedicating more than a quarter of their budget to AI initiatives.

But how do you maintain the human oversight that builds trust while operating at the speed and scale that modern business demands?

Traditional approval hierarchies and manual verification methods work when transaction volumes are manageable and threats develop slowly. But as business accelerates and AI tools become ubiquitous, both in beneficial applications and criminal misuse, purely manual approaches face practical limitations.

Consider the mathematics: an organization processing 5,000 invoices monthly with a team manually reviewing each one faces different constraints than one processing 500. The risk isn’t just that high volume creates fatigue and mistakes. It’s that the need for speed may push organizations toward automation without the trust-building characteristics they need.
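A back-of-the-envelope calculation makes that constraint concrete. The three-minutes-per-invoice figure below is an assumption chosen for illustration, not a survey result:

```python
# Rough monthly review load under an assumed ~3 minutes of manual
# review per invoice (illustrative, not a survey figure).
MINUTES_PER_INVOICE = 3

for monthly_invoices in (500, 5_000):
    hours = monthly_invoices * MINUTES_PER_INVOICE / 60
    ftes = hours / 160  # ~160 working hours in a month
    print(f"{monthly_invoices:>5} invoices/month -> {hours:.0f} review hours (~{ftes:.1f} FTEs)")
```

At 500 invoices a month the load is about 25 hours; at 5,000 it is roughly 250 hours, more than a full-time role spent on nothing but review.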

This is where responsible AI implementation becomes essential. Finance leaders need systems that can operate at scale and speed while maintaining the visibility, transparency, and human oversight that create trust.

What Responsible AI Looks Like in Finance

Responsible AI in finance operations requires several key characteristics:


Human-in-the-Loop Architecture

The most critical element is maintaining meaningful human oversight. This doesn’t mean humans manually reviewing every transaction; that’s neither practical nor necessary. It means designing systems where (see the sketch after this list):

  • AI handles pattern recognition and anomaly flagging at scale
  • Humans make final decisions on significant actions
  • The system escalates unusual or high-risk situations to human reviewers
  • Humans can override AI recommendations when context demands it
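As a sketch of how those four properties can fit together, the routing rule below auto-approves only low-risk, low-value items and sends everything else to a person. The Decision states, threshold values, and function names are hypothetical, chosen for the example:

```python
from enum import Enum

class Decision(Enum):
    AUTO_APPROVE = "auto_approve"          # low risk, low value: proceed, log for audit
    ESCALATE = "escalate"                  # unusual or risky: a human decides
    BLOCK_PENDING_REVIEW = "block"         # significant action: human sign-off required

def route(risk_score: float, amount: float,
          escalate_above: float = 0.5, high_value: float = 10_000.0) -> Decision:
    """Route one transaction: the AI supplies risk_score, a human owns the outcome."""
    if amount >= high_value:
        return Decision.BLOCK_PENDING_REVIEW   # significant actions always reach a person
    if risk_score >= escalate_above:
        return Decision.ESCALATE               # anomalies are pushed up, not through
    return Decision.AUTO_APPROVE

print(route(risk_score=0.72, amount=450.0))    # Decision.ESCALATE
```

An override path would sit downstream of ESCALATE and BLOCK_PENDING_REVIEW: whatever the model recommended, the reviewer's decision is the one that is recorded and executed.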

Currently, 91% of organizations use data analysis for fraud prevention, showing appetite for technology-assisted operations. The most successful implementations maintain meaningful human oversight while leveraging AI’s analytical capabilities at scale.

Transparency and Explainability

Finance leaders need to understand why an AI system flags certain transactions or approves others. Black-box algorithms that provide outputs without explanation don’t build trust; they just transfer it. Responsible AI systems should (a minimal example follows this list):

  • Explain what factors triggered a flag or recommendation
  • Provide visibility into the patterns and data driving decisions
  • Allow humans to verify the reasoning behind AI outputs
  • Create audit trails that can be reviewed and understood
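As a minimal illustration of what explainable, auditable output can look like in practice (the field names are hypothetical), each flag below carries the factors that triggered it and a slot for the human verdict:

```python
import datetime
import json

def explain_flag(invoice_id: str, triggered: list[str]) -> dict:
    """Build an audit-trail record: the flag names the factors behind it,
    so a reviewer can verify the reasoning rather than trust a bare score."""
    return {
        "invoice_id": invoice_id,
        "flagged_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "reasons": triggered,
        "reviewed_by": None,  # filled in when a human confirms or overrides the flag
    }

record = explain_flag("INV-1042", ["amount 4.2x vendor average", "bank details changed"])
print(json.dumps(record, indent=2))
```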

The 27% of organizations lacking visibility into suspicious activity demonstrate what happens without transparency: uncertainty, inability to assess effectiveness, and lack of confidence in controls.


Continuous Human Oversight and Governance

Trustworthy AI systems don’t operate autonomously without accountability. They require:

  • Regular review of AI system performance and accuracy
  • Human governance over how the system evolves and what it learns
  • Clear accountability for outcomes and decisions
  • Ability to adjust parameters and rules based on changing conditions

The 63% of finance leaders using multi-step approval workflows demonstrate understanding that important decisions benefit from multiple perspectives and checkpoints. Responsible AI should enhance these workflows, not eliminate them.

Secure Implementation and Ethical Deployment

The criminal misuse of AI that concerns 82% of finance leaders highlights why deployment matters as much as capability. Responsible AI requires:

  • Secure systems that can’t be easily compromised or manipulated
  • Ethical guidelines for how AI should and shouldn’t be used
  • Safeguards against misuse, both external (criminal) and internal
  • Clear boundaries on what AI should automate and what requires human judgment

Support for Digital Infrastructure

Moving from paper-based to digital processes creates the foundation for responsible AI implementation. Digital systems provide (a brief illustration follows this list):

  • Comprehensive audit trails that support transparency
  • Structured data that AI can analyze reliably
  • Verification checkpoints that can be systematically enforced
  • Real-time visibility that enables human oversight at scale
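As a brief illustration (all names hypothetical), a structured digital record is what makes checkpoints enforceable and audit trails possible; none of this exists for a paper invoice in a filing cabinet:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DigitalInvoice:
    """Structured, machine-checkable fields: the raw material that AI
    analysis and systematic verification checkpoints both depend on."""
    invoice_id: str
    vendor_id: str
    amount: float
    currency: str

def checkpoint(inv: DigitalInvoice, known_vendors: set[str]) -> list[str]:
    """A systematically enforced verification step; each finding doubles
    as a timestamped audit-trail entry."""
    stamp = datetime.now(timezone.utc).isoformat()
    findings = []
    if inv.vendor_id not in known_vendors:
        findings.append(f"{stamp} unknown vendor {inv.vendor_id} on {inv.invoice_id}")
    if inv.amount <= 0:
        findings.append(f"{stamp} non-positive amount on {inv.invoice_id}")
    return findings

print(checkpoint(DigitalInvoice("INV-2001", "V-999", 8_400.0, "USD"), {"V-100", "V-101"}))
```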

Organizations still dependent on paper invoices and manual processes face challenges implementing any form of trustworthy AI. The infrastructure doesn’t support the transparency and auditability that trust requires.


Building Trust: The Path Forward

Finance leaders resisting AI aren’t doing so because they are technophobes. They’re professionals who understand the stakes and want systems they can trust. The fact that 83% plan to implement AI-enabled solutions over the next two years shows they are ready to adopt AI systems that demonstrate the characteristics of responsible implementation: transparency, human oversight, accountability, and explainability.

Here's What Finance Leaders Need to Trust AI


Proof of Reliability

Track records demonstrating accuracy and effectiveness, not just promises of capability


Maintained Control

Systems that enhance human decision-making rather than removing humans from critical processes


Clear Accountability

Understanding of who’s responsible when AI systems make mistakes or produce unexpected outcomes


Ongoing Partnership

Vendors and systems that evolve with feedback rather than operating as static black boxes


Alignment with Values

AI deployment that respects the importance of human expertise, judgment, and ethical boundaries

Organizations that provide this responsible AI implementation will establish the new standard for financial operations in the digital age.

At Billtrust, we believe trust is built through transparency, human oversight, and proven results. Our approach to AI prioritizes augmenting human expertise rather than replacing it, giving finance teams the tools to work more strategically while maintaining the control and visibility they need. As finance leaders navigate AI adoption, we’re committed to demonstrating that responsible AI isn’t just possible. It’s the only path forward that honors the trust our customers place in us.

Ebook

AI Assistants are Now Autonomous: The New Era of AR is Here

Autonomous AI is here, so it’s go-time. Accelerate your AI journey with a checklist for AR success that overcomes 6 key hurdles.
Report

AR Meets AI: Data-Driven Insights from 500 Global Finance Leaders

Discover the benefits and concerns of AI in AR. See what 500 finance leaders have learned about striking the right balance.
Blog

Stop Casting a Giant Net: Targeted, AI Approaches to Collections are Smarter

Tired of chasing payments? Shift from a reactive loop to a precision strategy with AI collections tools. Here’s how.

Frequently asked questions

Why are finance leaders cautious about adopting AI?

Finance leaders are cautious because they are concerned about AI’s potential for misuse, with 82% expressing this concern. They have witnessed the rise of sophisticated AI-generated fraud, such as voice cloning and deepfake phishing emails. As a result, they require systems built on transparency, accountability, and human oversight.

What does responsible AI in finance look like?

Responsible AI in finance is not about replacing humans but augmenting them. It uses a “human-in-the-loop” architecture where AI handles large-scale data analysis and anomaly detection, while humans make the final critical decisions. This approach must also be transparent, explainable, and secure.

How do AI systems earn finance leaders’ trust?

Trust is built when systems provide proof of reliability, maintain human control, and offer clear accountability. Finance leaders need to understand how an AI system makes decisions (explainability) and have the ability to audit those results, ensuring humans remain in control of critical processes.