As the financial sector races to embrace artificial intelligence, a new frontier is rapidly emerging: agentic AI in finance.

These systems move beyond passive automation: they make decisions, initiate actions, and operate with increasing autonomy. But this rise brings a sharp challenge. While adoption is soaring, readiness to govern these systems effectively lags behind.

Recent global surveys reflect this tension. Over 90% of financial services firms in Europe report using AI in some capacity, and 72% of institutions globally have at least one machine learning system in production. Yet only a third feel truly prepared to scale AI safely and responsibly. The gap is especially stark when it comes to agentic AI, where the legal, ethical, and operational risks multiply rapidly.

Adoption accelerates, but maturity is uneven

The rise of agentic AI in finance is not speculative; it’s already here. Banks and insurers are deploying AI agents in fraud detection, trading platforms, customer engagement, and even advisory roles. The technology promises transformative benefits, such as speed, efficiency, personalisation, and insight. But as EY’s 2024 European survey warns, enthusiasm is outpacing control frameworks. Fewer than 31% of firms say they’re “on track” with AI integration.

Further, the World Economic Forum’s 2025 report echoes this sentiment: while AI is being integrated into critical financial infrastructure, questions remain around transparency, explainability, and accountability, especially for autonomous systems.

Governing agentic AI in finance: a new imperative

Governance frameworks for AI must now evolve beyond static compliance checklists. Institutions need to develop dynamic, lifecycle-based controls that match the fluid nature of agentic systems.

Key features of effective governance for agentic AI in finance include:

  • Board-level oversight and clear lines of accountability
  • Granular risk-tiering of agentic use cases, particularly those with customer or market-facing impact
  • Sandbox environments to test agentic behaviours before release
  • Red-teaming and adversarial testing to stress-test AI decision-making
  • Continuous monitoring for drift, hallucinations, and non-compliant behaviour
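
To make the risk-tiering and monitoring points above more concrete, here is a minimal, illustrative sketch of how an institution might encode agentic use cases as data, assign each to a tier, and map tiers to monitoring obligations. The tier names, classification rules, and control lists are hypothetical assumptions for illustration, not a prescribed standard.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers; a real taxonomy should follow the firm's own risk framework."""
    LOW = "low"        # internal, non-customer-facing assistance
    MEDIUM = "medium"  # customer-facing, with human-reviewed output
    HIGH = "high"      # customer- or market-impacting autonomous actions


@dataclass
class AgentUseCase:
    name: str
    customer_facing: bool
    can_execute_transactions: bool
    max_autonomy_minutes: int  # how long the agent may act without human review


def classify(use_case: AgentUseCase) -> RiskTier:
    """Toy tiering rule: escalate the tier for customer- or market-facing impact."""
    if use_case.can_execute_transactions:
        return RiskTier.HIGH
    if use_case.customer_facing:
        return RiskTier.MEDIUM
    return RiskTier.LOW


def monitoring_controls(tier: RiskTier) -> list[str]:
    """Map each tier to the monitoring obligations the governance framework requires."""
    controls = ["log all agent actions", "periodic drift review"]
    if tier in (RiskTier.MEDIUM, RiskTier.HIGH):
        controls += ["sample outputs for human review", "hallucination spot checks"]
    if tier is RiskTier.HIGH:
        controls += [
            "real-time anomaly alerts",
            "mandatory human sign-off",
            "adversarial (red-team) testing before each release",
        ]
    return controls


if __name__ == "__main__":
    case = AgentUseCase("retail payment agent", customer_facing=True,
                        can_execute_transactions=True, max_autonomy_minutes=0)
    tier = classify(case)
    print(case.name, "->", tier.value, monitoring_controls(tier))
```

Expressing the tiering logic as code or configuration has a practical governance benefit: the rules become reviewable, versioned artefacts that second-line risk teams and auditors can inspect alongside the models they govern.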

The risks here are not hypothetical. Agentic AI systems can take unintended actions, base outputs on biased data, or breach legal obligations, sometimes without any clear audit trail. Managing these risks demands a blend of legal insight, technical rigour, and cultural change.

Acceptable use policies: often outdated, now critical

In most firms, acceptable use policies (AUPs) have not kept pace with the evolution of AI. Traditional AUPs focus on general IT usage or narrow definitions of automation. For agentic AI in finance, this is no longer sufficient.

A modern AI acceptable use policy should include:

  1. A clear taxonomy of agentic systems used within the institution
  2. Defined boundaries for acceptable and prohibited uses
  3. Trigger points for human intervention
  4. Protocols for customer transparency and consent
  5. Monitoring and escalation pathways when agents behave unexpectedly
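
As a rough illustration of points 2, 3, and 5 above, an AUP can also be expressed in machine-readable form so that boundaries, intervention triggers, and escalation routes are checked before an agent acts. The field names, thresholds, and contact address below are hypothetical assumptions, not a standard schema.

```python
from dataclasses import dataclass


@dataclass
class AupRule:
    agent_class: str            # entry in the firm's agent taxonomy
    permitted_actions: set[str]
    prohibited_actions: set[str]
    human_approval_over: float  # monetary threshold that triggers human sign-off
    escalation_contact: str


@dataclass
class PolicyDecision:
    allowed: bool
    reason: str
    escalate_to: str | None = None


def evaluate(rule: AupRule, action: str, amount: float) -> PolicyDecision:
    """Apply the AUP rule to a proposed agent action before it is executed."""
    if action in rule.prohibited_actions:
        return PolicyDecision(False, f"'{action}' is prohibited", rule.escalation_contact)
    if action not in rule.permitted_actions:
        return PolicyDecision(False, f"'{action}' is outside the permitted set", rule.escalation_contact)
    if amount > rule.human_approval_over:
        return PolicyDecision(False, "amount exceeds autonomy threshold; human approval required",
                              rule.escalation_contact)
    return PolicyDecision(True, "within policy")


if __name__ == "__main__":
    rule = AupRule(
        agent_class="customer-service agent",
        permitted_actions={"answer_query", "issue_refund"},
        prohibited_actions={"open_account", "give_investment_advice"},
        human_approval_over=500.0,
        escalation_contact="conduct-risk@firm.example",
    )
    print(evaluate(rule, "issue_refund", 1200.0))
```

In practice the rules would live in a governed policy store rather than in application code, but the shape is the same: explicit permitted and prohibited actions per agent class, a clear trigger point for human intervention, and a named escalation path.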

Guidance from regulators is starting to take shape. Singapore’s FEAT principles and the UK’s five guiding principles (fairness, transparency, safety, accountability, and contestability) offer a solid foundation. But for now, financial institutions must lead the way in defining what responsible deployment of agentic AI looks like.

Regulatory uncertainty and talent shortages

Another recurring theme is the mismatch between rapid adoption and regulatory clarity. There is no global standard yet for managing agentic AI in finance, and even national frameworks are still evolving. This leaves institutions exposed to jurisdictional risk and regulatory lag.

Moreover, many firms simply lack the talent to manage AI risk effectively. Roles such as AI ethicist, algorithm auditor, and data governance architect are still new and in short supply. Without skilled people to translate policy into practice, even the best frameworks can fail.

The way forward: building a new social contract for financial AI

Agentic AI in finance is here to stay. Its power to transform decision-making, operations, and customer relationships is undeniable. But with power comes responsibility.

To earn public trust and regulatory confidence, financial institutions must invest in robust governance, meaningful acceptable use policies, and a clear ethical framework. They must move from experimentation to operationalisation, from innovation theatre to infrastructure.

In doing so, they won’t just manage risk; they’ll lead the way in setting global standards for the safe, fair, and transparent use of artificial intelligence in financial services.

How ITLawCo can help

At ITLawCo, we specialise in bridging the legal, technical, and strategic divides that financial institutions face in adopting agentic AI responsibly.

Whether you’re experimenting with autonomous agents, scaling production models, or responding to regulatory scrutiny, we offer:

  • Custom-designed AI governance frameworks aligned with global best practices
  • Acceptable use policies tailored to your business model, risk appetite, and regulatory obligations
  • Board and executive briefings that demystify AI risk and outline practical paths to compliance
  • Lifecycle support, from model design reviews and vendor contracts to incident response playbooks
  • Training and upskilling, helping your teams embed responsible AI principles across departments

We understand the unique challenges of agentic AI in finance because we work at the intersection of innovation and law.

If you’re ready to turn risk into leadership and compliance into a competitive advantage, let’s talk.