
The Financial Sector Conduct Authority (FSCA) and the South African Reserve Bank’s Prudential Authority (PA) have released the country’s most comprehensive analysis to date on AI in the financial sector. Based on more than 2,000 survey responses and a detailed databook, the FSCA–PA AI report establishes a clear regulatory baseline and signals a new era in AI governance.

AI has shifted from innovation theatre to a regulatory priority.

This article distils what the report means for financial institutions, the risks and governance implications, and how organisations should prepare for the supervisory direction now emerging.


1. A new regulatory posture: Coordinated, assertive, unavoidable

The FSCA and PA are aligning on a common supervisory stance toward AI: one that blends global best practice with South Africa’s twin-peaks model.

What regulators are signalling

  • Risk-based classification as the primary lens for AI oversight
  • Mandatory model governance (validation, monitoring, drift detection)
  • Human oversight requirements for high-impact AI
  • Transparent communication when AI influences consumer outcomes
  • Tighter alignment with POPIA, the cybersecurity Joint Standards, and market conduct duties
  • Emphasis on fairness, explainability, data quality, and ethical AI

South Africa is unlikely to introduce a broad “AI Act” immediately, but sector-specific AI expectations are coming rapidly.

2. Adoption reality: Widespread but shallow

The report shows that AI is widely discussed but not deeply embedded—yet.

Where the sector stands

  • Only 10.6% of surveyed institutions currently use AI
  • AI is strongest in banks, insurers, and payments providers
  • Adoption is concentrated in low- and medium-risk functions

Top uses today

Traditional AI / ML

  • Fraud detection
  • AML/CFT monitoring
  • Process automation
  • Risk and IT operations

Generative AI

  • Sales & marketing
  • Customer support
  • Internal process optimisation

High-stakes decisioning (credit scoring, underwriting, prudential models) remains tentative, mainly due to explainability and data governance challenges.

3. Investment patterns: A two-speed sector

  • ~50% of institutions plan to invest under R1 million
  • 45% of banks intend to invest over R30 million

This creates:

  • Tier 1: Major banks accelerating aggressively
  • Tier 2: Mid-sized insurers and asset managers following cautiously
  • Tier 3: Fintechs, lenders, and pension funds experimenting at small scale

Regulators will need to monitor concentrated AI capabilities among top-tier banks, which could influence systemic stability.

4. Risk landscape: Data, security, explainability

The databook highlights a hierarchy of risks that mirrors global patterns.

Top technical risks

  • Data privacy (POPIA)
  • Data quality
  • Data security
  • Explainability & interpretability
  • Bias, fairness, representativeness

Top organisational risks

  • AI-enabled cybersecurity threats
  • Accountability gaps
  • Poor transparency to customers
  • Workforce transition pressures
  • Third-party dependencies & model concentration risks

Data governance is the backbone of AI governance.

The Information Regulator now implicitly becomes part of the AI ecosystem.

5. Governance maturity: Strong foundations, serious gaps

Where the sector is strong

  • Risk management frameworks (68%)
  • Data governance frameworks (55%)
  • Dedicated accountable executives (50%)
  • Ethical principles (47%)

Where it is weak

  • 34% of institutions use no explainability methods
  • Only 13% use SHAP or comparable tools
  • Fairness testing is inconsistent
  • Third-party model risks under-recognised

These gaps are precisely where supervisory expectations will harden.
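To make the explainability gap concrete: the report names SHAP as the benchmark tooling, but even institutions without a SHAP pipeline can run simpler model-agnostic checks. The sketch below illustrates permutation importance — shuffle one feature's values and measure how much model accuracy drops; a near-zero drop suggests the model does not actually rely on that feature. The toy model and data here are hypothetical, purely for illustration, and are no substitute for the SHAP-style attributions the report references.

```python
import random

# Hypothetical toy model: feature 0 drives the decision, feature 1 is noise.
def model(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature, trials=20, seed=0):
    """Average drop in accuracy when one feature's values are shuffled
    across rows. A model-agnostic explainability check: importance near
    zero suggests the feature does not drive the model's decisions."""
    rng = random.Random(seed)
    base = accuracy(rows, labels)
    drops = []
    for _ in range(trials):
        col = [r[feature] for r in rows]
        rng.shuffle(col)
        shuffled = [list(r) for r in rows]  # copy rows before mutating
        for r, v in zip(shuffled, col):
            r[feature] = v
        drops.append(base - accuracy(shuffled, labels))
    return sum(drops) / trials
```

Run against labelled data, shuffling the decisive feature should produce a large accuracy drop, while shuffling the noise feature should produce none — a cheap first step toward the documented, attributable decisioning supervisors will expect.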

6. How South Africa aligns globally

South Africa’s regulatory trajectory is converging with:

  • EU AI Act: risk tiers, oversight, documentation
  • UK FCA/BoE: model governance & accountability
  • MAS FEAT / Veritas: fairness, ethics, transparency
  • NIST AI RMF: lifecycle-based risk management

However, SA will likely adopt a phased, supervisory-first approach rather than a sweeping legislative framework.

7. What financial institutions need to do next

Banks

  • Implement full Model Risk Management 2.0 for AI
  • Enhance explainability across credit, fraud, AML, and underwriting
  • Strengthen oversight of cloud, APIs, and third-party AI tools
  • Establish AI assurance functions

Insurers

  • Prepare for fairness & transparency requirements in pricing and claims
  • Enhance documentation for automated decisioning
  • Tighten model monitoring for drifts and anomalies
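Drift monitoring of the kind described above is often operationalised with the Population Stability Index (PSI), which compares a model's baseline input distribution against recent data. The sketch below is an illustrative implementation, not drawn from the report; the conventional alert thresholds (PSI above roughly 0.1 signalling moderate shift, above 0.25 significant shift) are industry rules of thumb, not regulatory requirements.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample (expected)
    and a recent sample (actual). Larger values indicate the recent
    distribution has drifted further from the baseline."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def frac(sample, i):
        count = sum(1 for x in sample
                    if lo + i * width <= x < lo + (i + 1) * width)
        if i == bins - 1:  # fold the top edge into the last bin
            count += sum(1 for x in sample if x == hi)
        return max(count / len(sample), 1e-6)  # floor to avoid log(0)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )
```

Recomputing PSI on a schedule for each material model input, and escalating breaches through the model risk function, is one concrete way to evidence the "monitoring for drifts and anomalies" supervisors will look for.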

Investment firms

  • Document ML-driven trading models
  • Strengthen market manipulation controls

Payments & fintech

  • Bolster cybersecurity
  • Prioritise identity verification risks
  • Prepare for transparency requirements in automated systems

8. ITLawCo’s advisory view

The message is clear: AI governance is now a strategic, regulatory, and operational necessity.

  • AI governance frameworks & operating models: Designing end-to-end governance structures, roles, controls, and assurance mechanisms for AI systems across the enterprise.
  • POPIA-compliant AI lifecycle design: Embedding privacy-by-design, data minimisation, lawful processing, and accountability into every stage of the AI model lifecycle.
  • AI risk assessments & assurance reviews: Conducting model risk evaluations, bias and fairness assessments, DPIAs, and independent AI assurance reviews.
  • Model explainability toolkits: Providing explainability frameworks, methods (SHAP, LIME, ICE, PDP), documentation templates, and governance for transparent AI decisioning.
  • Responsible GenAI usage policies: Developing organisation-wide GenAI usage rules, controls, ethical guidance, and safeguards aligned with regulatory expectations.
  • Executive & board training: Delivering strategic training on AI governance, ethics, legal risks, fiduciary obligations, and regulatory readiness for leadership teams.

We help organisations navigate the intersection of AI, ethics, regulation, strategy, and risk. To assess your organisation’s AI readiness or prepare for the regulatory trajectory signalled by the FSCA and PA, contact ITLawCo’s AI Governance team.

FAQs

What is the FSCA–PA AI report?

It is the first comprehensive regulatory study assessing AI adoption, risks, and governance practices across South Africa’s financial sector.

What does the report mean for financial institutions?

Institutions should prepare for clearer expectations around explainability, fairness, data governance, model risk, consumer transparency, and supervisory oversight.

Where is AI being used most today?

Fraud detection, AML/CFT monitoring, internal process optimisation, sales, marketing, and customer support.

Will South Africa introduce formal AI regulation?

Not immediately. Guidance and supervisory expectations will come first, followed by a gradual move toward structured requirements.

How should institutions prepare?

By implementing AI governance frameworks, strengthening data quality, improving explainability, documenting high-impact models, and ensuring POPIA alignment.

Publication details

Written by: Nathan-Ross Adams
Founder & MD, ITLawCo
Specialist in AI GRC

Disclaimer

This article is for informational purposes only and does not constitute legal advice. For formal guidance, please contact ITLawCo’s AI Governance practice.