
By Michael Gentry, shareholder at Reinhart Boerner Van Deuren, a WBA Bronze Associate Member
While generative AI systems have begun to reshape how many businesses operate, banks and other financial institutions must act more deliberately in adopting new technologies. Regulators require financial institutions to evaluate these tools carefully through existing frameworks. Banks cannot set aside compliance teams or risk management considerations around consumer protection, data privacy and cybersecurity simply to adopt AI systems.
That measured approach reflects strength, not hesitation. Banks already maintain strong cybersecurity programs and layered controls, and they employ professionals who manage complex threats every day. These defenses give banks a solid foundation for AI governance, but they do not remove the need for it.
Criminals, by contrast, do not need to convince a committee to adopt a new technology. They increasingly use AI to enhance fraud schemes, exposing banks to losses, regulatory scrutiny and litigation. The increasing use of deepfakes in impersonation scams illustrates the risk. Fraudsters now use AI‑generated audio, video and images to impersonate executives, customers and vendors, pressuring employees to authorize wire transfers, change payment instructions or disclose sensitive information. All in, the Federal Trade Commission (FTC) reported that impersonation fraud ranked among the most common fraud categories in 2024, with $2.95 billion in losses to U.S. businesses and consumers.
This AI adoption by threat actors heightens exposure for financial institutions. Payment systems move quickly. Employees make decisions under time pressure. Remote and hybrid work reduce informal verification. AI‑enabled impersonation can exploit these conditions and bypass traditional cybersecurity controls.
The AI Adoption Gap Creates a Governance Gap
This imbalance creates an AI governance gap: banks must operate deliberately, while criminals act without constraint. IBM’s 2025 Cost of a Data Breach Report illustrates the risk of not investing in AI‑specific governance measures. Thirteen percent of surveyed organizations reported breaches involving AI models or applications. Of those surveyed, 97 percent lacked controls governing internal AI use, and 63 percent reported not having an AI governance policy.
Banks can close this gap by extending governance beyond cybersecurity. Technical safeguards remain essential, but they cannot by themselves prevent employees or vendors from introducing unapproved AI into workflows. Without clear rules and oversight, well‑intentioned staff may expose data, weaken controls or create new attack paths.
Don’t Wait on Slow, Uncertain Regulation
When not stalled by political debate, federal regulators have focused on promoting home‑grown AI dominance rather than enacting a unified AI regulatory framework. States have moved ahead independently, creating a patchwork of laws that vary by jurisdiction and use case.
Some states have taken broader, risk‑based approaches. Colorado’s Artificial Intelligence Act, scheduled to take effect in June 2026, imposes consumer‑protection obligations on developers and deployers of high‑risk AI systems, which will have a direct impact on financial institutions’ use of AI in lending. Other states have targeted specific applications. An Illinois law recently took effect requiring employers to disclose any use of AI in hiring, promotion, discipline or related actions.
But the future of these and other state laws is murky. In December 2025, President Trump issued an executive order seeking to curb state AI laws, directing federal agencies to challenge regulations the administration views as barriers to innovation and domestic competitiveness.
This tension leaves financial institutions operating amid overlapping and unsettled obligations. Waiting for federal clarity invites risk. Organizations that delay assessing and managing AI use will compound the risks created by the governance gap and the speed of threat actors’ adoption. Internal governance provides the only stable path to controlling risk while the legal landscape continues to shift.
Here are five practical steps for financial institutions looking to advance their AI governance:
- Establish a clear AI governance policy tying permissible AI use to existing risk frameworks.
- Ensure that the governance program is managed by an empowered committee with deep knowledge of those frameworks.
- Strengthen vendor controls and contractual requirements around AI use.
- Train employees to counter AI‑enabled fraud and avoid unapproved tools.
- Prepare incident response and litigation strategies addressing AI misuse.
Why This Matters for Bank Leadership
AI‑enabled fraud already affects financial institutions. Banks do not lack expertise or infrastructure. They manage complex risk every day. By extending existing safeguards into a clear AI governance framework, banks can prevent rogue AI use, reduce exposure and demonstrate diligence to regulators and courts. Institutions that act now to close the governance gap will stand strongest as AI‑driven threats evolve.