Regulatory Insights

Regulating AI in Financial Services: What the 2026 Landscape Means for Your Platform

Regulatory bodies across multiple jurisdictions are moving from guidance to enforcement when it comes to AI in financial services. Here is what institutions need to understand before their next exam.


Zanele Dlamini

Regulatory Affairs Lead

3 February 2026
7 min read
Regulation · AI Act · Model Governance · Compliance · Explainability

A Shifting Regulatory Posture

For several years, regulators in financial services took a broadly exploratory approach to AI: issuing guidance, encouraging innovation, and establishing principles without hard enforcement postures. That era is ending.

In 2025 and into 2026, multiple jurisdictions — including the EU under the AI Act, the UK's FCA, South Africa's FSCA, and US regulators via updated SR 11-7 guidance — have begun treating AI explainability, bias testing, and model governance as examination priorities rather than aspirational standards.

For institutions deploying AI in financial crime detection, credit decisioning, or customer risk scoring, this shift has material compliance implications.

The Explainability Imperative

The most common regulatory concern around AI in financial services is explainability: the ability to articulate why a particular decision was made in terms that are comprehensible to a human reviewer, an affected customer, or an examiner.

This requirement is not new in spirit — the logic of adverse action notices in credit decisions has always required some explanation — but AI systems, particularly deep learning models, have historically produced decisions that resist intuitive explanation.

The regulatory response has been pragmatic: a bias toward model types that support intrinsic explainability (decision trees, logistic regression, gradient boosting with SHAP values) and a requirement for rigorous post-hoc explanation frameworks when more complex architectures are deployed.
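To make the post-hoc option concrete, here is a minimal, illustrative sketch of explaining a gradient boosting classifier's output with SHAP values. The feature names and data are synthetic assumptions for illustration, and it presumes the scikit-learn and shap packages are available; it is not a prescribed implementation.

```python
# A minimal, illustrative sketch of post-hoc explanation with SHAP for a gradient
# boosting classifier. Feature names and data are synthetic assumptions; assumes
# scikit-learn and the shap package are installed.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(seed=42)
feature_names = ["txn_velocity_30d", "avg_txn_value", "cross_border_ratio", "account_age_days"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer yields per-feature contributions for tree ensembles, so a single
# score can be decomposed into terms a human reviewer can inspect.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]

for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")
```

The decomposition gives each alert or score a per-feature breakdown that can be surfaced to a reviewer or examiner alongside the raw decision.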

Model Governance and the Audit Trail

Beyond explainability, regulators expect institutions to maintain comprehensive documentation of the AI models they deploy:

  • Model development methodology and validation results
  • Training data provenance and bias testing outcomes
  • Performance monitoring over time, including drift detection
  • Change management procedures when models are updated
  • Clear accountability for model outcomes within the institution
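One way to operationalise this documentation is to capture each model's governance metadata as a structured record that can be exported for review. The sketch below is illustrative only; the field names are assumptions, not a regulatory or Intellidata schema.

```python
# An illustrative governance record for a deployed model. Field names are
# assumptions for this sketch, not a prescribed regulatory schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelGovernanceRecord:
    model_id: str
    accountable_owner: str            # named role responsible for model outcomes
    purpose: str                      # e.g. "customer risk rating"
    training_data_sources: list[str]  # provenance of training data
    validation_summary: str           # headline results from independent validation
    bias_testing_outcome: str
    last_performance_review: date
    drift_metrics: dict[str, float] = field(default_factory=dict)
    change_log: list[str] = field(default_factory=list)

record = ModelGovernanceRecord(
    model_id="txn-risk-v3",
    accountable_owner="Head of Financial Crime Analytics",
    purpose="customer risk rating",
    training_data_sources=["core_banking_2023", "kyc_refresh_2024"],
    validation_summary="Out-of-time AUC 0.91, stable across customer segments",
    bias_testing_outcome="No disparate impact above internal thresholds",
    last_performance_review=date(2026, 1, 15),
    change_log=["3.0: retrained on 2024 data", "3.1: recalibrated score cut-offs"],
)
```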

For platforms like Themis that support regulatory-facing decisions — SARs, customer risk ratings, PEP and sanctions screening — this documentation is not optional. It is the difference between a defensible compliance programme and a significant regulatory finding.

What Institutions Should Be Doing Now

Based on the regulatory trajectory we are observing, we recommend that institutions take the following steps in 2026:

  1. Conduct a model inventory: Map all AI/ML models that influence customer risk ratings, transaction monitoring, or compliance decisions.
  2. Assess explainability: For each model, evaluate whether current outputs are explainable to a non-technical examiner.
  3. Establish monitoring cadences: Implement regular model performance reviews that track both accuracy metrics and potential bias drift (see the sketch after this list).
  4. Document the governance chain: Ensure that accountability for each model is clearly assigned and documented.
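As a concrete example of the monitoring step, the sketch below computes a Population Stability Index (PSI) between a baseline score distribution and a recent one. PSI is one common drift signal among several, and the 0.2 threshold used here is a widely cited rule of thumb rather than a regulatory requirement.

```python
# An illustrative drift check using the Population Stability Index (PSI), one common
# way to flag a shift between a baseline score distribution and recent scores.
# The 0.2 threshold is a widely used rule of thumb, not a regulatory requirement.
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Compare two score distributions bucketed on the baseline's quantiles."""
    cut_points = np.quantile(baseline, np.linspace(0, 1, bins + 1))[1:-1]
    expected = np.bincount(np.searchsorted(cut_points, baseline), minlength=bins) / len(baseline)
    actual = np.bincount(np.searchsorted(cut_points, current), minlength=bins) / len(current)
    expected = np.clip(expected, 1e-6, None)  # avoid log(0) for empty buckets
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=10_000)   # scores captured at validation time
current_scores = rng.beta(2.5, 5, size=10_000)  # scores from the latest review window
psi = population_stability_index(baseline_scores, current_scores)
print(f"PSI: {psi:.3f} -> {'investigate drift' if psi > 0.2 else 'stable'}")
```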

The Intellidata Approach

Themis was built with regulatory defensibility as a first-order requirement. Every alert generated by Themis carries a structured explanation — grounded in observable data, expressed in plain language, and traceable to the specific model features that drove the output.

This is not a compliance bolt-on. It is an architectural decision that reflects our conviction that intelligence without accountability is not intelligence — it is liability.
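For illustration only, a structured, feature-traceable alert explanation of the kind described above might be serialised along the following lines. The field names and values are hypothetical and do not represent the actual Themis schema.

```python
# A purely hypothetical illustration of a structured, feature-traceable alert
# explanation; the field names and values are assumptions, not the Themis schema.
alert_explanation = {
    "alert_id": "A-2026-00417",
    "decision": "escalate_for_review",
    "plain_language_summary": (
        "30-day transaction volume is roughly four times the customer's 12-month "
        "average, with a sharp increase in cross-border transfers."
    ),
    "contributing_features": [
        {"feature": "txn_velocity_30d", "value": 4.1, "contribution": 0.42},
        {"feature": "cross_border_ratio", "value": 0.63, "contribution": 0.31},
    ],
    "model_id": "txn-risk-v3",
    "model_version": "3.2.1",
}
```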
