ARTICLE
26 August 2025

Navigating Responsible Innovation: SEBI's Blueprint For AI/ ML In Indian Securities Markets

Sarthak Advocates & Solicitors

Contributor

Sarthak Advocates & Solicitors is a pan-India law firm with a strong focus on corporate and commercial laws. The firm delivers advisory and transaction support services, and strategizes for and represents clients in disputes and arbitration. Our values and aspirations derive from the word ‘Sarthak’, which in Sanskrit means ‘meaningful’.

The Securities and Exchange Board of India (“SEBI”) has unveiled a forward-looking consultation paper dated June 20, 2025, titled ‘Guidelines for Responsible Usage of AI/ML in Indian Securities Markets’ (“Consultation Paper”). With the Indian financial ecosystem rapidly evolving, this initiative aims to shape a governance framework that balances the transformative power of Artificial Intelligence (“AI”) and Machine Learning (“ML”) with the need for fairness, transparency, and accountability.

Introduction: Embracing AI/ML in the Financial Landscape

From intelligent chatbots and trading algorithms to advanced surveillance and risk assessment systems, AI and ML are rapidly transforming the landscape of capital markets. This transformation has been propelled by the growing availability of high-quality data, exponential gains in computational power, and significant improvements in both software and hardware infrastructure. The emergence of Generative AI (Gen AI) and Large Language Models (LLMs) has further unlocked unprecedented opportunities for market participants in the financial sector. Today, market participants employ AI and ML across a wide array of functions, including advisory and support services, risk management, client identification and monitoring, surveillance, pattern recognition, internal compliance, and cybersecurity. Yet, as the influence of these technologies deepens, so does the responsibility to deploy them ethically, transparently, and with robust oversight.

Rationale: Optimizing Benefits While Mitigating Risks

While AI/ML offers game-changing potential to enhance productivity, streamline operations, and deliver sharper outcomes, its unchecked use can just as easily open the floodgates to unintended risks. From algorithmic biases and opaque decision-making to cybersecurity vulnerabilities and vendor dependencies, these technologies, if not governed properly, could disrupt market integrity and put investor trust at stake. Recognizing this double-edged nature, SEBI underscores the urgent need for a robust oversight framework. The Consultation Paper proposes a set of high-level guiding principles to ensure that AI/ML adoption in the securities market is not just innovative, but also responsible and resilient. It addresses a spectrum of emerging risks, ranging from fairness and bias, accountability and transparency, to cyber risk and third-party oversight, while outlining practical measures to mitigate them and reinforce trust in AI-driven finance.

Key Recommendations: Laying the Groundwork for Responsible AI and ML

To chart a clear path forward, SEBI constituted a dedicated working group tasked with examining the responsible use of AI/ML in the Indian capital markets. Drawing on global best practices, extensive consultations with market participants, and expert inputs from the Committee on Financial and Regulatory Technologies (CFRT), the group has laid the foundation for a forward-thinking regulatory approach. The result is a set of guiding principles designed to ensure that AI/ML applications in the securities market are not only innovative but also transparent, accountable, and aligned with investor protection goals. The Consultation Paper distills these insights into 5 (five) key pillars for responsible AI/ML deployment.

  1. Model Governance:

    The working group emphasizes that market participants must establish dedicated internal teams with the necessary expertise to oversee the full lifecycle of AI/ML models, including development, validation, deployment, performance monitoring, and ongoing testing. These models must be auditable, explainable, and thoroughly documented, with version control and the ability to replay outcomes for diagnostic purposes. A key recommendation of the working group is to appoint senior management with relevant technical knowledge to take ownership of AI/ML oversight. The governance structure must also include risk controls, exception handling, and fallback mechanisms to ensure that critical functions continue even during technical disruptions.

    Given the dynamic nature of AI, especially models that learn from live data, market participants should conduct periodic performance reviews, implement continuous monitoring, and regularly share accuracy results with SEBI. Where third-party AI/ML services are used, market participants must establish formal service-level agreements with clear deliverables, performance indicators, and remedies for underperformance, while retaining full responsibility for compliance. Additional measures include:
    1. Clearly defined data governance protocols covering ownership, access, and encryption.
    2. Independent audits by teams uninvolved in development to ensure transparency and fairness.
    3. Secure log retention for event tracking and model behaviour analysis.
    4. Designing AI systems that respect user autonomy, uphold ethical values, and comply with all legal and regulatory obligations.
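By way of illustration only, the auditability, version-control, and replay expectations above could be sketched as a minimal append-only prediction log; the class and method names here are hypothetical assumptions, not drawn from the Consultation Paper:

```python
import hashlib
import json
from datetime import datetime, timezone

class ModelAuditLog:
    """Hypothetical append-only record of model predictions, keyed by model
    version. Storing the exact inputs and outputs per version lets an
    auditor "replay" past decisions against the same model build."""

    def __init__(self):
        self._records = []

    def record(self, model_version: str, inputs: dict, output) -> str:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
        }
        # Tamper-evident digest over the serialized entry.
        entry["digest"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._records.append(entry)
        return entry["digest"]

    def replay(self, model_fn, model_version: str) -> list:
        """Re-run a model over logged inputs; True where outputs still match."""
        return [
            model_fn(r["inputs"]) == r["output"]
            for r in self._records
            if r["model_version"] == model_version
        ]

# Illustrative usage with a trivial scoring "model".
score = lambda x: x["income"] // 1000
log = ModelAuditLog()
log.record("v1.0", {"income": 52000}, score({"income": 52000}))
print(log.replay(score, "v1.0"))  # [True] while the model is unchanged
```

A real implementation would persist entries to secure, write-once storage for the mandated retention period; the point of the sketch is only that replayability requires capturing versioned inputs and outputs at decision time.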

  2. Investor Protection - Disclosure:

    For AI/ML tools that directly affect clients, such as algorithmic trading, portfolio management, and advisory and support services, SEBI proposes a strong focus on disclosure and transparency. Market participants must inform clients about their use of AI/ML tools to build trust, accountability, and clarity. Disclosures should cover:
    1. Product features, intended purpose, limitations, and potential risks.
    2. Accuracy results and performance metrics of the AI/ML model.
    3. Fees or charges associated with AI-driven services.
    4. Quality and relevance of data powering AI-based decisions.

All information must be communicated in clear, comprehensible language, avoiding technical jargon to ensure that investors can make well-informed choices. Furthermore, SEBI recommends that market participants align their investor grievance redressal mechanisms for AI/ML services with the regulator's existing framework, ensuring clients continue to have access to timely and fair resolutions.

  3. Testing Framework:

    Unlike static systems, AI/ML models evolve over time, often adjusting to new data inputs and changing market conditions. Recognizing this, the working group has proposed a multi-layered testing strategy to ensure that models remain accurate, reliable, and compliant throughout their lifecycle. Before deployment, AI/ML models must be tested in segregated environments, isolated from live operations, to evaluate performance under both stressed and unstressed conditions. In addition to traditional testing methods, shadow testing using live market traffic is recommended, allowing firms to gauge real-time model behaviour without exposing clients or systems to risk. Post-deployment, continuous monitoring becomes essential, as AI/ML models can change behaviour in unforeseen ways as they learn and adapt over time. Regular validation is necessary to detect unexpected drifts or anomalies resulting from subtle shifts in operating conditions or noisy inputs. To support long-term accountability, market participants must maintain comprehensive documentation of each model, capturing input/output data, logic flows, and all development iterations for a minimum period of 5 (five) years. This ensures that model outcomes remain explainable, traceable, and auditable, even long after they are first deployed.
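As one illustration of the post-deployment drift monitoring described above, a common industry metric is the Population Stability Index (PSI), which compares the distribution of live inputs against the distribution seen at model sign-off. The thresholds in the comments are a market convention, not something the Consultation Paper prescribes:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (training-time) distribution and live data.

    A common rule of thumb (an industry convention, not from the
    Consultation Paper): PSI < 0.1 stable; 0.1-0.25 monitor;
    > 0.25 investigate for drift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparsely populated bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Synthetic example: live data with and without a distribution shift.
rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)         # distribution at model sign-off
live_ok = rng.normal(0, 1, 10_000)       # live data, no shift
live_drift = rng.normal(1.0, 1, 10_000)  # live data, mean has shifted

print(population_stability_index(train, live_ok))     # small: stable
print(population_stability_index(train, live_drift))  # large: flags drift
```

In practice such a check would run continuously over rolling windows of live inputs and model outputs, with breaches escalated for human review, consistent with the monitoring and validation expectations above.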

  4. Fairness and Bias:

    In the realm of finance, fairness is not just a moral imperative; it is a regulatory necessity. The working group has emphasized that AI/ML models must be free from discrimination, especially in client-facing applications such as credit profiling, investment recommendations, or onboarding processes. Market participants must ensure that the data used to train these models is representative, relevant, and of high quality. Poor or biased data can skew outcomes, reinforcing inequalities and undermining investor trust. To address these issues, it has been recommended that market participants conduct bias awareness training for data scientists and AI/ML teams.

  5. Data Privacy and Cybersecurity:

    Given that AI/ML systems thrive on vast amounts of data, often including sensitive investor information, the working group has laid out clear expectations for data protection and cyber resilience. Every market participant deploying AI must establish robust policies addressing:
    • Data privacy and encryption
    • Cybersecurity controls tailored to AI risks
    • Access management and data ownership protocols.

Additionally, all data collection, storage, and processing must comply with relevant data protection laws, including consent, usage boundaries, and retention norms. In the event of technical glitches or data breaches, market participants must follow established protocols to promptly notify SEBI and relevant authorities, such as the Indian Computer Emergency Response Team (CERT-In) and the upcoming Data Protection Board of India, ensuring transparency and swift containment of risks.

A Calibrated Compliance Path: SEBI's Tiered Approach to AI/ML Oversight

Recognizing the diverse use cases of AI/ML applications in the securities market and the varying degrees of risk they pose, the working group has recommended a tiered compliance framework. This approach keeps regulatory obligations proportionate to the nature and impact of the AI/ML system in use, enabling innovation and investor protection to coexist. Under this framework, it is proposed to distinguish between AI/ML systems that directly affect clients and investors and those used purely for internal operational efficiency.

  1. Full Compliance for Client-Facing Use Cases: AI/ML applications that directly influence investor decisions or client outcomes, such as algorithmic trading, portfolio management, or robo-advisory, will fall under the full scope of the proposed guidelines. Given their high impact on market fairness, transparency, and investor trust, such use cases must adhere comprehensively to all 5 (five) pillars outlined in the Consultation Paper. This ensures that systems influencing real-time financial decisions are developed and deployed within a robust regulatory perimeter.
  2. ‘Regulatory Lite Framework' for Internal-Only Applications: In contrast, for AI/ML tools used strictly for internal purposes, such as compliance automation, risk monitoring, and trade surveillance, the Consultation Paper proposes a lighter regulatory framework. These applications, while integral to operational resilience, do not directly impact investors and can therefore be subject to a more streamlined oversight regime. In such cases, only a subset of the proposed obligations would apply: core elements of model governance (such as internal team competency and senior oversight), ethical alignment, a robust testing framework, and data privacy and cybersecurity standards in line with applicable laws.

This differentiated approach not only avoids regulatory overreach in low-risk areas but also ensures heightened accountability where investor interests and financial stability are at stake.

Confronting the Dark Side of Innovation: Addressing AI/ML Risks in Capital Markets

As promising as AI/ML models are, their integration into the financial ecosystem also brings a set of unprecedented risks, many of which could compromise market stability and investor confidence if left unchecked. The Consultation Paper takes a proactive stance by identifying these threats and laying down strategic guardrails to mitigate them.

  1. Malicious usage leading to market manipulation and/or misinformation: Control measures include watermarking and provenance tracking of AI-generated content, encouraging suspicious activity reporting, and public awareness campaigns.
  2. Concentration Risk: To mitigate reliance on a limited number of Gen AI providers, measures include periodic reporting of third-party vendors to the regulator, encouraging diversification of AI suppliers, and enhanced monitoring of critical vendors and AI applications provided by them.
  3. Herding and Collusive Behaviour: Promoting diverse AI models and data sources, monitoring for herding behaviour by stock exchanges, algorithmic auditing, and implementing circuit breakers are suggested.
  4. Lack of explainability: Solutions include requiring detailed AI process documentation, using interpretable AI models or explainability tools, and mandating human review of AI outputs.
  5. Model failure / runaway AI behaviour: Stress testing, implementing volatility controls like circuit breakers and kill switches and mandating human oversight and accountability for AI-driven decisions are proposed.
  6. Lack of Accountability and Regulatory Non-Compliance: This can be addressed through regulatory sandboxes for testing AI systems, human oversight and accountability mechanisms, and staff training on potential compliance risks.
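The volatility controls mentioned above, such as circuit breakers and kill switches for runaway AI behaviour, could in the simplest case look like the following sketch; the class design, limits, and the rule that only a human may re-enable trading are illustrative assumptions, not controls prescribed by SEBI:

```python
class KillSwitch:
    """Hypothetical kill-switch guard for an automated trading loop: halts
    order flow once a loss limit or order-rate limit is breached, and
    requires an explicit human reset before trading resumes."""

    def __init__(self, max_loss: float, max_orders_per_min: int):
        self.max_loss = max_loss
        self.max_orders = max_orders_per_min
        self.tripped = False
        self._loss = 0.0
        self._orders = 0

    def allow_order(self) -> bool:
        # Refuse orders once tripped or once the rate limit is hit.
        if self.tripped or self._orders >= self.max_orders:
            self.tripped = True
            return False
        self._orders += 1
        return True

    def record_pnl(self, pnl: float) -> None:
        # Accumulate realized losses; trip when they exceed the limit.
        self._loss += max(0.0, -pnl)
        if self._loss > self.max_loss:
            self.tripped = True

    def human_reset(self) -> None:
        # Only a deliberate human action re-enables order flow,
        # reflecting the human-oversight principle above.
        self.tripped = False
        self._loss = 0.0
        self._orders = 0
```

Production controls would of course be far richer (per-symbol limits, exchange-level circuit breakers, audit trails), but the sketch captures the core idea: an automated hard stop paired with accountable human intervention.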

The Path Ahead: Ushering in Responsible AI in Capital Markets

SEBI's proposed framework signals a transformative shift in the governance of AI and ML within India's securities market. By introducing structured, principle-based guidelines, it sets the stage for a future where technological innovation is balanced with accountability, transparency, and investor protection. The framework's emphasis on model governance, robust testing, and ethical deployment will push market participants to adopt stronger internal controls and take greater responsibility for AI systems, particularly in high-impact, client-facing functions. The tiered compliance approach adds necessary nuance, ensuring that regulatory scrutiny is proportionate to risk without stifling low-risk innovation. Key safeguards around data privacy, cybersecurity, and third-party oversight will enhance market integrity and systemic resilience, while proactive measures against bias, manipulation, and explainability gaps will foster trust in AI-driven decision-making. In essence, this marks a new era for Indian capital markets, one where intelligent systems are deployed not just efficiently, but also ethically and securely.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
