The Ministry of Electronics and Information Technology ("MeitY") released a report titled "AI Governance Guidelines Development" ("Report") for public consultation on January 6, 2025. The Report underscores the need for a unified, whole-of-government approach to ensure compliance and effective governance as India's AI ecosystem continues to expand. Prepared by a subcommittee established by MeitY on November 9, 2023, the Report is part of a larger initiative led by India's Principal Scientific Advisor ("PSA"). Under the guidance of a multistakeholder Advisory Group, the subcommittee addressed key issues, identified gaps, and proposed actionable recommendations for the ethical and responsible development of AI in India.
Artificial Intelligence ("AI") has emerged as a game-changer in recent years, sparking innovation across sectors and offering transformative potential for economic growth and societal advancement. However, with great power comes great responsibility. As AI continues to evolve, the need for robust governance mechanisms has become more pressing, especially in a diverse and rapidly developing country like India. The Report emphasizes the need for a unified, whole-of-government approach to ensure ethical and accountable AI systems.
A. Key Concepts to Operationalize AI Principles
The subcommittee has identified three key concepts to guide the operationalization of AI principles, which are crucial for shaping AI governance in India.
- Life-cycle Approach - This approach involves examining AI systems at different stages: development, deployment, and diffusion. Each stage presents distinct risks and challenges that need to be addressed.
- Development: Focus on the design, training, and testing phases.
- Deployment: Examine the implementation and operational use of AI systems.
- Diffusion: Evaluate the long-term impact of widespread AI adoption across sectors.
- Ecosystem Approach - Multiple actors can be involved across the lifecycle of any AI system. Together, they create an ecosystem including:
- Data Principals
- Data Providers
- AI Developers (including Model Builders)
- AI Deployers (including App Builders and Distributors)
- End-users (including both businesses and citizens).
Traditional governance approaches can be limited if they focus on one set of actors in isolation. Looking at governance across the ecosystem yields better, more holistic outcomes. An ecosystem view of actors could also help clarify how responsibilities and liabilities are distributed among the different actors involved.
- Techno-Legal Approach for Governance - The rapid advancement of AI technologies has created a complex ecosystem involving diverse models, applications, and stakeholders across various domains. Traditional "command-and-control" governance approaches are no longer sufficient to oversee this rapidly evolving landscape. A techno-legal strategy integrates regulatory frameworks with technology-driven tools to ensure effective monitoring, compliance, and risk mitigation. For instance, blockchain tracking can provide traceability for AI-generated content, while AI compliance systems can automate the detection of biases or harmful outputs in real time. A tool like consent artifacts (borrowed from MeitY's Electronic Consent Framework) could assign immutable identities to ecosystem actors, allowing activities to be tracked and liability chains to be established. This creates an environment of shared accountability, fostering self-regulation and reducing the burden on traditional enforcement mechanisms.
However, these automated solutions must undergo periodic reviews to ensure they remain secure, accurate, fair, and compliant with fundamental rights like freedom of speech and privacy. By embedding these tools into a "digital by design" governance framework, regulators can not only ensure compliance but also promote innovation by providing clarity and flexibility to developers and stakeholders. This approach balances robust oversight with adaptability, enabling India's AI ecosystem to thrive responsibly while addressing risks effectively.
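To make the consent-artifact idea concrete, the sketch below is illustrative only and not drawn from the Report; all actor names and fields are hypothetical. It shows one way a hash-chained activity log could tie ecosystem actors to their actions, so that tampering with any earlier record invalidates every later one, which is the basic property a "liability chain" relies on:

```python
import hashlib
import json

def record_event(chain, actor_id, action):
    """Append a tamper-evident event linking an actor to an action.

    Each entry embeds the hash of the previous entry, so altering any
    earlier record breaks every subsequent hash in the chain.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"actor": actor_id, "action": action, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute every hash; return False if any record was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(
            {"actor": entry["actor"], "action": entry["action"], "prev": entry["prev"]},
            sort_keys=True,
        ).encode()
        if entry["prev"] != prev_hash or entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain = []
record_event(chain, "model-builder-01", "trained model v1 on licensed dataset")
record_event(chain, "app-builder-07", "deployed model v1 in loan-screening app")
print(verify_chain(chain))  # True for an untampered chain
```

A production system would of course need cryptographic identities and key management rather than bare strings, but the chaining principle is the same.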
B. Gaps in India's AI Governance Framework
The Report highlights several critical challenges in the current legal and regulatory setup for managing AI in India. One of the primary issues is compliance and enforcement. While existing laws such as the IT Act and sectoral regulations address certain aspects of AI, they fall short of comprehensively managing AI-specific risks. For instance, although current laws penalize the misuse of deepfakes, robust mechanisms for detecting and preventing them are lacking.
Cybersecurity frameworks also need to be upgraded to match the rapid advancements in AI technology. Intellectual property rights ("IPR") present another challenge, particularly concerning the use of copyrighted data for training AI models and determining ownership of AI-generated works. Bias and discrimination in AI systems add another layer of complexity, as current laws do not adequately address these issues.
Transparency and accountability are also significant concerns. The lack of mechanisms to trace data, models, and actors across the AI lifecycle makes it difficult to assign responsibility for outcomes or risks. Moreover, the fragmented approach to regulation, with various regulators working in silos, creates inefficiencies and leaves gaps in addressing cross-sectoral risks posed by AI systems.
C. Recommendations to Strengthen AI Governance
To address these gaps, the Report provides 6 (six) key recommendations aimed at creating a trustworthy and accountable AI ecosystem in India:
- Establish a Whole of Government Coordination Mechanism - The Report proposes forming an Inter-Ministerial AI Coordination Committee or Governance Group, led by the Principal Scientific Advisor and MeitY. This body would bring together key regulators and government departments to develop a unified governance roadmap, harmonize efforts, and ensure efficient regulation. This aligns with Rule 22 of the DPDP Rules, which emphasizes collaboration between authorized persons and fiduciaries for data-related purposes, such as addressing national security and sovereignty concerns. A unified governance body would ensure data protection standards and AI governance are harmonized across government agencies.
- Create a Technical Secretariat for AI Governance - A dedicated Technical Secretariat would serve as an advisory body and coordination hub for the governance group. It would pool multidisciplinary expertise, map India's AI ecosystem, conduct risk assessments, and develop standards and frameworks for responsible AI use.
- Build an AI Incident Database - To monitor real-world AI risks, the Secretariat would establish a database to document incidents such as system failures, privacy violations, and discriminatory outcomes. This repository would help regulators and stakeholders understand patterns and devise mitigation strategies. Notably, Rule 7 of the DPDP Rules requires data fiduciaries to notify affected principals and the Board of any personal data breaches. An AI incident database would align with this requirement, enabling a centralized system for monitoring and mitigating risks.
- Encourage Voluntary Industry Commitments - The Report emphasizes the importance of industry self-regulation. Developers and deployers of AI systems should commit to transparency through practices like releasing model cards, conducting red-teaming exercises, and disclosing system capabilities and risks. These commitments would complement government regulations.
- Leverage Technological Solutions for Governance - Technological tools, such as watermarking and labelling, could be used to improve traceability and accountability in the AI ecosystem. These solutions would enable real-time tracking of AI outputs and help identify and address risks effectively.
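As an illustration of the labelling idea, the sketch below is an assumption-laden example, not a mechanism prescribed by the Report; the key handling and field names are hypothetical. It shows how a deployer could attach a signed, machine-readable provenance label to each AI output so that the label cannot be forged or silently detached:

```python
import hmac
import hashlib
import json

SECRET_KEY = b"demo-key"  # hypothetical; a real deployer would manage keys securely

def label_output(text, model_id):
    """Attach a provenance label, bound to the content with HMAC-SHA256
    so that altering either the content or the label is detectable."""
    label = {"generator": model_id, "ai_generated": True}
    message = (text + json.dumps(label, sort_keys=True)).encode()
    tag = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return {"content": text, "label": label, "tag": tag}

def verify_label(record):
    """Recompute the tag over content + label and compare in constant time."""
    message = (record["content"] + json.dumps(record["label"], sort_keys=True)).encode()
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

rec = label_output("Sample generated paragraph.", "model-x-2025")
print(verify_label(rec))  # True; editing the content or the label makes this False
```

Robust media watermarking embeds the signal in the content itself rather than in a detachable sidecar, but the verification logic follows the same pattern.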
- Integrate AI-Specific Measures into the Digital India Act - The upcoming Digital India Act ("DIA") should incorporate provisions for addressing AI-related risks. This includes enhancing grievance redressal mechanisms, enabling "digital by design" adjudication systems, and ensuring robust regulatory oversight for AI systems.
Conclusion
AI holds immense promise for India, but unlocking its full potential requires a governance framework that ensures trust, accountability, and inclusivity. The gaps identified in the Report underscore the urgency of a coordinated, whole-of-government approach, and its six recommendations provide a clear roadmap for achieving it, focusing on collaboration, transparency, and the integration of technological and regulatory solutions. By implementing these measures, India can build a robust AI ecosystem that not only drives innovation but also safeguards public interests and fosters equitable growth.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.