Artificial intelligence is now embedded across Kenya’s banking system. Institutions are deploying machine-learning tools for credit scoring, fraud monitoring, customer verification, advisory services, and operational decision-making. While this shift reflects the global trend toward technology-driven finance, Kenya’s adoption is accelerating faster than the supporting governance structures.
This article draws on two primary sources:
- The Central Bank of Kenya (CBK) Survey on Artificial Intelligence in the Banking Sector, which provides the first structured look into how banks are deploying AI and the associated risks.
- The Kenya Artificial Intelligence Strategy 2025–2030, which outlines national expectations for trustworthy, transparent and rights-respecting AI systems.
Taken together, the data shows an industry rapidly innovating but not yet sufficiently prepared for the legal, ethical, and operational risks that follow. For organisations, the implications are strategic and regulatory. For consumers, the implications are legal, especially where AI-based decisions affect access to credit, financial inclusion, or privacy.
Below is a structured analysis of how Kenyan banks are using AI, where the gaps are and, importantly, which legal levers customers could rely on when challenging harmful or opaque AI-driven outcomes.
1. How Kenyan Banks Are Using AI Today
According to the CBK Survey, banks are integrating AI in five core areas:
1.1 Credit Scoring and Lending Decisions
AI models now support:
- loan eligibility assessments
- risk profiling
- behavioural scoring
- automated approvals/declines
Machine-learning systems ingest customer transaction histories, alternative data, mobile money usage, digital footprints, and proprietary risk matrices. This reduces turnaround times but increases the risk of opaque or biased decisions.
1.2 Fraud Detection and AML Monitoring
Most banks rely on AI-enabled anomaly detection systems to flag suspicious behaviour. These models typically analyse high-volume transactions and customer behaviour patterns. Accuracy varies widely, and false positives remain a challenge, especially where the bank cannot clearly explain why a flag was triggered.
1.3 Customer Service and Interaction
Chatbots, automated call centre tools, and sentiment-analysis engines support frontline engagement. These systems collect significant personal data and sometimes access transactional information. Weak governance here increases data-misuse risk.
1.4 Operational Efficiency and Risk Management
AI is also being used to optimise liquidity, forecast risk positions, and streamline back-office processes.
1.5 Investment and Advisory Tools
A small but growing number of institutions use AI for investment recommendations and algorithmic trading, exposing them to potential suitability and negligence claims if systems mis-advise customers.
While adoption is widespread, the governance picture is less encouraging.
2. Governance Gaps Identified by the CBK Survey
The survey highlights several weak points:
2.1 Limited Bias Mitigation and Fairness Controls
A significant proportion of banks do not have formal mechanisms to detect or correct discriminatory outcomes. Given Kenya’s diverse socioeconomic context, the use of unverified training data can produce exclusionary credit models, especially for informal, low-income, or rural populations.
2.2 Lack of Explainability and Traceability
Only around half of surveyed institutions can sufficiently explain the basis of their AI-generated decisions. Explainability matters for two reasons:
- Regulators need transparency to assess compliance.
- Customers have the right to understand decisions affecting their financial wellbeing.
2.3 Weak Audit Trails
Many institutions lack comprehensive logs that track how models operate, what data they use, and how outputs are produced. Without auditability, accountability collapses.
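To make the auditability point concrete, the sketch below shows what a minimal, traceable record of a single AI-driven decision might look like. The field names and structure are illustrative assumptions, not CBK or ODPC requirements; the point is that every output is tied to a model version, its inputs, and its stated basis.

```python
import json
from datetime import datetime, timezone

def log_model_decision(customer_ref, model_name, model_version,
                       input_features, output, reason_codes):
    """Build one auditable record for an AI-driven decision.

    Field names here are illustrative, not regulatory requirements.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "customer_ref": customer_ref,    # pseudonymised identifier
        "model": model_name,
        "model_version": model_version,  # ties the decision to a specific build
        "inputs": input_features,        # what data the model actually saw
        "output": output,                # e.g. a decline plus a risk score
        "reason_codes": reason_codes,    # the stated basis for the decision
    }
    return json.dumps(record)

# Hypothetical decline on a thin-file applicant
entry = log_model_decision(
    customer_ref="cust-8f3a",
    model_name="credit-score",
    model_version="2.4.1",
    input_features={"txn_volume_90d": 42, "mobile_money_active": True},
    output={"decision": "decline", "score": 0.31},
    reason_codes=["low transaction volume", "thin credit file"],
)
print(entry)
```

A record like this is what lets a bank answer the two questions regulators and courts will ask: what did the model see, and why did it decide as it did.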
2.4 Data Governance Gaps
Banks reported that their biggest AI-related risks are:
- weak data governance (59%)
- inadequate cybersecurity (54%)
- insufficient anonymisation/pseudonymisation
- risks arising from third-party vendors
This aligns closely with the warnings in Kenya’s National AI Strategy.
2.5 Overreliance on Third-Party Vendors
More than half of the banks depend on external AI providers. Where contractual, security, and oversight frameworks are weak, accountability becomes fragmented, and customers bear the consequences. Banks are essentially running high-impact systems on governance structures designed for a pre-AI world.
3. Alignment With Kenya’s AI Strategy 2025–2030
The national strategy emphasises:
- fairness and non-discrimination
- privacy and data protection by design
- transparency
- accountability
- secure and ethical deployment
- responsible data use in financial services
These principles are not optional aspirations; they are intended to guide sectoral regulation. In banking, this means institutions should already be operationalising them. The CBK findings suggest this is not yet happening at scale.
4. The Legal Exposure: Where AI Failures Become Lawsuits
AI does not exempt banks from existing regulatory obligations. In fact, it expands them. If a machine makes a harmful, unfair, or unexplained decision, the bank remains accountable.
Customers have four major legal pathways when challenging AI-related financial decisions.
4.1 Data Protection Act, 2019
The DPA is the strongest foundation for challenging AI-driven financial harms.
4.1.1 Section 25 – Principles of Data Processing
Banks must ensure fairness, transparency, accuracy, necessity, and purpose limitation in all data processing, including algorithmic decision-making. Biased or opaque AI models directly breach these principles.
4.1.2 Section 30 – Automated Decision-Making
This is critical.
Data subjects have the right not to be subject to a decision based solely on automated processing that significantly affects them (such as loan denial, fraud flagging, or account restriction) unless proper safeguards are in place. Safeguards include:
- human oversight
- meaningful explanation
- the ability to contest the decision
If a bank denies a loan using an opaque model and cannot justify it, the customer has a cause of action.
4.1.3 Section 41 – Accuracy of Personal Data
If AI systems use inaccurate or outdated data to reach decisions, customers can challenge the outcomes.
4.1.4 Complaints to the ODPC
The ODPC has jurisdiction over:
- unfair processing
- unlawful profiling
- excessive data collection
- weak data security
- failures in automated decision safeguards
Given the ODPC’s current enforcement trend, in which compensation is frequently awarded, banks face real financial exposure.
4.2 Consumer Protection Act, 2012
AI-driven outcomes can constitute:
- unfair conduct
- unconscionable practices
- misleading representations
- failure to act with reasonable care and skill
Examples include:
- unexplained loan rejections
- incorrect risk profiles
- false fraud flags
- harmful chatbot advice framed as authoritative
Banks owe consumers a duty to provide accurate, transparent, and fair services, regardless of the technology used.
4.3 Banking Act and Prudential Guidelines
Where AI compromises:
- operational risk management
- fairness in lending
- suitability of advice
- anti-fraud safeguards,
regulators may classify this as unsafe or unsound banking practice. Customers harmed by such failures can claim negligence or breach of statutory duty.
4.4 Constitutional Petitions
If AI systems produce discriminatory outcomes, claimants can anchor cases on:
- Article 27 (Equality and Non-Discrimination)
- Article 31 (Privacy)
- Article 46 (Consumer Rights)
This avenue will grow in relevance as algorithmic discrimination becomes more visible.
5. Where Litigation Is Most Likely to Arise
Based on global trends and Kenya’s current governance gaps, three scenarios stand out.
5.1 Unfair or Biased Credit Decisions
AI models may disadvantage individuals who:
- operate in informal sectors
- lack large digital footprints
- come from under-represented demographics
- use cash predominantly
- have inconsistent mobile money patterns
If these patterns correlate with socioeconomic characteristics, the model can inadvertently discriminate, opening banks to DPA, CPA, and constitutional claims.
5.2 Inaccurate Fraud Flags and Account Restrictions
False positives remain a major issue in Kenya’s banking sector. AI models designed to detect fraud or unusual activity can misclassify legitimate customer behaviour, leading to account restrictions or delays in accessing funds.
Examples include legitimate SIM swaps flagged as potential fraud, irregular salary deposits misinterpreted as suspicious activity, and routine group or chama transactions flagged by anti-money laundering systems. Small businesses and informal-sector workers with fluctuating cash flows (hawkers, boda boda riders, mitumba sellers, food vendors) are also disproportionately penalised by lending models. In each case, these automated errors can result in financial disruption, operational inconvenience, or reputational harm; where the AI decision is opaque or unjustified, the bank may be held liable under data protection, consumer protection, or negligence principles.
5.3 Data Breaches via AI Pipelines
AI expands the attack surface.
Where third-party vendors mishandle data or models leak sensitive information, customers may sue for:
- breach of data protection obligations
- negligence
- violation of privacy rights
5.4 Harmful Automated Advice
If an AI-powered advisory tool recommends unsuitable products, the bank could face claims under consumer protection and negligence principles.
6. What Banks Should Already Be Doing (But Many Are Not)
From a governance and legal-risk standpoint, institutions should be:
6.1 Implementing Explainable AI (XAI)
Customers must be given meaningful reasons for AI-based decisions.
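One simple way to see what "meaningful reasons" could look like in practice: rank each input feature by how much it pushed the score down, and surface the negative contributors as stated reasons. The sketch below assumes a simple additive (linear) scoring model with hypothetical feature names; real deployments would use model-specific attribution methods, but the principle of translating contributions into reasons is the same.

```python
def top_adverse_reasons(weights, features, k=3):
    """List the features that pushed a linear score down the most.

    Assumes score = intercept + sum(weight * value). The weights and
    feature names here are illustrative, not a real scoring model.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    # Most negative contributions first
    ranked = sorted(contributions.items(), key=lambda item: item[1])
    return [name for name, contrib in ranked[:k] if contrib < 0]

# Hypothetical applicant
weights = {"txn_volume_90d": 0.02, "missed_payments": -0.6, "account_age_yrs": 0.1}
features = {"txn_volume_90d": 10, "missed_payments": 2, "account_age_yrs": 1}
print(top_adverse_reasons(weights, features, k=2))  # ['missed_payments']
```

Even a rough mechanism like this gives the customer something to contest, which is exactly what Section 30 of the DPA contemplates.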
6.2 Establishing Human Oversight Mechanisms
Automated outcomes cannot be final without review options.
6.3 Conducting Algorithmic Impact Assessments
Especially for high-impact functions like lending and fraud detection.
6.4 Strengthening Data Governance
Including:
- data minimisation
- accuracy controls
- anonymisation
- robust security protocols
- vendor due diligence
- contractual controls for model use
6.5 Building Audit Trails
Log data, model training materials, versioning decisions, and decision flows should be traceable.
6.6 Testing for Bias
Regular fairness audits must be standard practice.
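A basic building block of such an audit is comparing approval rates across groups. The sketch below computes a disparate impact ratio on illustrative data; the "four-fifths" threshold shown is a common screening heuristic from fairness practice, not a Kenyan regulatory standard, and a low ratio is a trigger for investigation rather than proof of discrimination.

```python
def approval_rate(decisions):
    """Fraction of applicants approved in a group (1 = approved, 0 = declined)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of approval rates between two groups.

    A common screening heuristic (the 'four-fifths rule') treats a
    ratio below 0.8 as a signal of potential adverse impact.
    """
    return approval_rate(group_a) / approval_rate(group_b)

# Illustrative data only, not survey figures
urban = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
rural = [1, 0, 0, 1, 0, 0, 0, 1]   # 37.5% approved

ratio = disparate_impact_ratio(rural, urban)
print(round(ratio, 2))  # 0.5, well below the 0.8 screening threshold
```

An audit programme would run checks like this across protected and proxy characteristics on every model release, and document the follow-up where a flag is raised.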
6.7 Aligning with Kenya’s AI Strategy
The Strategy is explicit: AI in financial services must be accountable, explainable, inclusive, and rights-respecting.
7. The Emerging Litigation Landscape
Kenya will inevitably enter a new era of algorithmic accountability.
As banks deepen their use of AI, three things will become unavoidable:
- Customers will increasingly demand explanations for automated decisions.
- Regulators will intensify scrutiny around fairness, transparency, and data protection.
- Litigation and ODPC complaints will rise, especially where decisions affect credit access and financial inclusion.
AI is not simply a technical tool; it is a decision-making infrastructure with direct social and economic implications. Where these decisions carry risk, the law provides multiple avenues for redress.
8. Conclusion
AI is transforming Kenya’s banking sector, but governance frameworks have not kept pace with innovation. The CBK Survey and the Kenya AI Strategy send the same message: institutions must strengthen transparency, accuracy, fairness, and accountability.
For financial institutions, the priority is proactive governance.
For consumers, the priority is awareness of their rights.
For policymakers and legal practitioners, the priority is shaping a regulatory environment where innovation does not come at the expense of fairness or accountability.