Introduction

Artificial intelligence (AI) is gaining widespread acceptance in India’s financial sector. The technology is being used in customer service (chatbots), credit scoring, fraud monitoring, lead generation, churn prediction and operations automation. The RBI’s October 2024 bulletin warned that even as AI helps reduce human error, its rising use brings challenges around bias and the ethical use of data. To address these concerns, the regulator has set up a committee to develop a Framework for Responsible and Ethical Enablement (FREE) of AI in the Financial Sector. Against this backdrop, banks must proactively address ethical issues and establish robust governance frameworks for AI.

AI adoption in Indian banks

Indian banks are leading the way in using AI to make banking smarter and more efficient. For example, almost all major banks have deployed AI-powered chatbots like State Bank of India’s SBI Intelligent Assistant (SIA), HDFC Bank’s EVA (Electronic Virtual Assistant), Axis Bank’s Aha, Kotak’s Keya and Canara Bank’s AURA (Always Up for Reliable Assistance), among others. These chatbots handle everything from answering queries and providing account statements to issuing cheque books and blocking cards.

Banks also use AI for fraud detection (e.g., transaction monitoring), credit underwriting (alternate credit scoring), compliance (AML / KYC checks) and branch automation. The Reserve Bank Innovation Hub (RBIH) has developed ‘MuleHunter.AI’, an in-house AI / ML solution to detect suspected mule accounts. At the same time, regulators encourage innovation with safeguards. The RBI, while encouraging AI use in KYC / AML monitoring and credit scoring, has said that deployed models should be reviewed periodically by the regulated entities and remain subject to supervisory review. The Ministry of Electronics and Information Technology (MEITY) has advised firms to explicitly label AI outputs and guard against unlawful or misleading content. In short, AI is becoming mainstream in Indian banking, but its adoption raises new ethical and governance questions that banks must address.
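
Much of this transaction monitoring rests on anomaly detection: train a model on normal activity and flag what deviates for human review. The sketch below illustrates the idea only; it is not how MuleHunter.AI or any bank’s system actually works, and every feature and number in it is invented.

```python
# Minimal sketch: flagging anomalous transactions with an unsupervised model.
# Features and numbers are invented; not any bank's actual setup.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy features per transaction: amount (INR), hour of day, txns in last 24h
normal = np.column_stack([
    rng.lognormal(mean=7, sigma=1, size=1000),   # typical amounts
    rng.integers(8, 22, size=1000),              # daytime activity
    rng.poisson(3, size=1000),                   # modest velocity
])
suspicious = np.array([[900000, 3, 40]])         # large, 3 a.m., high velocity

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for outliers, 1 for inliers
print(model.predict(suspicious))   # -> [-1], flagged for human review
```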

Risks of AI in banking

The following are key risks associated with AI implementation in banking.

1. Inherent bias
AI models trained on historical data can inherit and amplify existing biases. For example, credit-scoring algorithms that use alternative data (social media, transaction patterns, number of contacts, etc.) could disproportionately exclude women and marginalised groups.
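
A common first-pass check for such bias is to compare approval rates across groups, often expressed as a disparate impact ratio. A minimal sketch, with invented approval counts:

```python
# Minimal sketch: disparate impact ratio as a first-pass fairness check.
# All approval counts are invented for illustration.
def approval_rate(approved: int, applicants: int) -> float:
    return approved / applicants

rate_reference = approval_rate(approved=620, applicants=1000)
rate_protected = approval_rate(approved=410, applicants=1000)

# A ratio below ~0.8 is a widely used red flag (the "four-fifths rule")
di_ratio = rate_protected / rate_reference
print(f"Disparate impact ratio: {di_ratio:.2f}")  # 0.66 -> warrants review
```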

2. Lack of explainability
Many AI models (especially complex machine learning or deep learning systems) behave as ‘black boxes’, making decisions without an accessible explanation. This is problematic in banking: a customer denied a loan by an AI system may never learn why, and the bank may be unable to say, eroding trust.

The issue is magnified with advanced AI models such as large language models (LLMs), which are so complex that even their creators often cannot trace how inputs become outputs. There is an inherent tension between model complexity and interpretability: simpler models are easier to explain but may be less accurate, while complex models perform better but sacrifice explainability.
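
One mitigation is to pair, or replace, black-box models with inherently interpretable ones. The sketch below, on synthetic data with hypothetical feature names, shows why a linear model is easy to explain: each coefficient times the feature value is that feature’s additive contribution to the decision, which can be surfaced to the customer as a reason code.

```python
# Minimal sketch: an interpretable credit model on synthetic data.
# Feature names and the toy data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income_lakhs", "existing_emis", "missed_payments"]

X = np.column_stack([
    rng.normal(8, 3, 500),     # annual income in lakhs
    rng.integers(0, 5, 500),   # number of existing EMIs
    rng.integers(0, 4, 500),   # missed payments in the last year
])
# Toy ground truth: income helps; debt load and missed payments hurt
y = (X[:, 0] - 1.5 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(0, 1, 500)) > 4

model = LogisticRegression().fit(X, y)

applicant = np.array([6.0, 3, 2])           # one denied applicant
contributions = model.coef_[0] * applicant  # additive terms in the log-odds
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")  # the most negative terms explain the denial
```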

3. Data privacy concerns
AI relies on vast amounts of data, often including sensitive personal and financial information. Under India’s new Digital Personal Data Protection (DPDP) Act, 2023, banks must obtain explicit, informed consent from customers before processing their personal data. The Act mandates that data be used only for specified purposes to which the customer agreed.

This has significant implications: an AI model trained on customer data for one purpose cannot be repurposed for other services without fresh consent. Breaches can lead to legal penalties and loss of customer trust.
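
In engineering terms, purpose limitation can be enforced inside the data pipeline itself, by gating every data fetch on recorded consent. A minimal sketch; the registry, purpose labels and function names are all hypothetical:

```python
# Minimal sketch: gating data access on purpose-specific consent.
# The registry, purposes and helper names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    customer_id: str
    purposes: set[str] = field(default_factory=set)  # purposes consented to

registry = {"CUST001": ConsentRecord("CUST001", {"credit_scoring"})}

def fetch_for_purpose(customer_id: str, purpose: str) -> dict:
    record = registry.get(customer_id)
    if record is None or purpose not in record.purposes:
        # Purpose limitation: no recorded consent, no processing
        raise PermissionError(f"No consent from {customer_id} for '{purpose}'")
    return {"customer_id": customer_id}  # stand-in for the real data fetch

fetch_for_purpose("CUST001", "credit_scoring")       # permitted
try:
    fetch_for_purpose("CUST001", "marketing_model")  # never consented to
except PermissionError as e:
    print(e)
```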

4. Hallucinations and inaccuracies
Generative AI systems, such as large language models, can produce outputs that sound plausible but are factually incorrect, a phenomenon known as LLM hallucination. It arises because LLMs generate words based on probabilities, not verification: they do not know facts, they statistically predict what comes next based on patterns in their training data. In a banking context, an AI chatbot or advisor giving wrong financial advice could mislead customers or violate regulations.
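
The mechanism is easy to demonstrate. The toy generator below picks each next word purely from a probability table; nothing in the loop checks whether the resulting sentence is true, which is exactly the gap that lets fluent but wrong answers through. The table is hand-written for illustration, not a real language model.

```python
# Minimal sketch: text generation as pure probability sampling.
# The tiny "model" is a hand-written table, standing in for a real LLM.
import random

# P(next word | current word), invented for illustration
table = {
    "the":  [("loan", 0.5), ("rate", 0.3), ("bank", 0.2)],
    "loan": [("was", 0.6), ("rate", 0.4)],
    "rate": [("is", 0.7), ("was", 0.3)],
    "is":   [("4%", 0.5), ("9%", 0.5)],   # both sound equally plausible
    "was":  [("approved", 0.5), ("rejected", 0.5)],
}

def generate(word: str, steps: int = 4) -> str:
    out = [word]
    for _ in range(steps):
        options = table.get(out[-1])
        if not options:
            break
        words, probs = zip(*options)
        out.append(random.choices(words, weights=probs)[0])  # sample, never verify
    return " ".join(out)

random.seed(1)
print(generate("the"))  # fluent output with no guarantee of factual accuracy
```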


