Section 1: Introduction
On November 5, 2025, the Ministry of Electronics and Information Technology ("MeitY") formally released the India AI Governance Guidelines under the IndiaAI Mission. This long-awaited framework sets out a comprehensive blueprint for the ethical, safe, and responsible deployment of artificial intelligence across sectors in India. Importantly, the Guidelines do not introduce a standalone AI law; instead, they adopt a calibrated, "lightweight" and adaptive regulatory approach that builds upon existing legal frameworks while encouraging innovation with appropriate safeguards.
At the launch, IT Secretary S. Krishnan underscored this policy direction, noting that India has deliberately chosen to prioritise innovation and carefully observe global regulatory experiments before moving toward any new legislation. He emphasised that, wherever feasible, the government will rely on established legal mechanisms rather than creating sweeping new regulatory obligations.
Broadly, the Guidelines signal the government's intent to nurture AI growth within the contours of current legal regimes, introducing new rules only when essential to protect citizens and address demonstrable risks. The final document draws extensively from the January 2025 draft report, which was prepared under the guidance of the Principal Scientific Adviser to the Government of India, Prof. Ajay Kumar Sood, through a multi-stakeholder consultation involving government bodies, industry, academia, and civil society, and which proposed a regulatory model centred on responsible AI development, risk-based governance, and industry-led self-regulation.
The outcome is a principle-driven, techno-legal governance framework designed to carefully balance India's "AI for All" vision with the imperatives of accountability, transparency, and safety.
The Guidelines are structured in four parts:
(1) Seven core guiding principles ("sutras") forming the philosophical foundation;
(2) Key recommendations across six governance pillars;
(3) An Action Plan with short, medium, and long-term steps; and
(4) Practical guidelines for industry and regulators on implementation.
Emphasizing Adaptation Over New Legislation
A central theme of the Guidelines is that India will not enact a dedicated AI law at this stage but will instead adapt and update existing legal frameworks to address AI-related challenges. This adaptive approach is rooted in the belief that many AI risks (such as bias, misinformation, or deepfakes) can be managed under current statutes with some reinterpretation or amendment. For example, the committee noted that India's Information Technology Act, 2000 and even the new Bharatiya Nyaya Sanhita, 2023 (which replaces the Indian Penal Code, 1860) already contain provisions that could apply to malicious uses of AI-generated deepfakes. Likewise, the Digital Personal Data Protection Act, 2023 ("DPDP Act") governs the use of personal data for training AI models, meaning companies must obtain consent before using individuals' data to train AI or risk violating data protection rules. Similarly, the Consumer Protection Act, 2019 safeguards consumers from unfair trade practices, misleading advertisements, and service deficiencies, and its provisions apply where AI-enabled systems engage in any such violations.
Instead of rushing a new AI-specific law, MeitY's framework will tweak definitions in existing laws to cover AI contexts. One proposal, for instance, is to update the definition of "intermediary" under the IT Act and assess how it would apply to modern AI systems which generate content based on user prompts or even autonomously, and which refine their outputs through continuous learning. This raises the question of whether AI service providers (like generative AI platforms) should enjoy the same "safe harbour" protections as intermediaries under Section 79 of the IT Act or face new accountability obligations. The Guidelines signal that India will examine such gaps and, where needed, introduce targeted amendments to existing laws rather than immediately drafting an overarching AI Act. The emphasis is on "techno-legal" solutions: embedding legal safeguards and ethical principles into technology design itself, so that compliance becomes automatically enforceable and the need for after-the-fact enforcement is reduced.
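To make the "compliance by design" idea more concrete, below is a minimal, hypothetical sketch of how a DPDP-style consent check might be embedded directly into an AI training pipeline, so that personal data lacking valid consent never reaches the model. This is purely illustrative: the Guidelines do not prescribe any implementation, and all names, fields, and structures here are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical illustration of a "techno-legal" consent gate in a training
# pipeline. The DPDP Act's actual consent requirements are more detailed;
# every name and field below is an illustrative assumption.

@dataclass
class ConsentRecord:
    data_principal_id: str   # the individual the personal data relates to
    purpose: str             # the purpose the consent was given for
    expires_at: datetime     # tz-aware end of the consent validity window
    withdrawn: bool = False  # DPDP-style right to withdraw consent

def has_valid_consent(record: ConsentRecord, purpose: str) -> bool:
    """True only if consent covers this purpose, is unexpired, and not withdrawn."""
    return (
        not record.withdrawn
        and record.purpose == purpose
        and record.expires_at > datetime.now(timezone.utc)
    )

def filter_training_batch(batch, consent_index, purpose="model_training"):
    """Drop samples without valid, purpose-specific consent before training.

    Denies by default when no consent record exists for a data principal.
    """
    return [
        sample for sample in batch
        if (rec := consent_index.get(sample["data_principal_id"])) is not None
        and has_valid_consent(rec, purpose)
    ]
```

Embedding the check in the pipeline itself, rather than auditing after the fact, is the essence of the techno-legal approach the Guidelines describe: non-compliant data is structurally unable to flow into model training.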
The Guidelines also recommend a coordinated, whole-of-government approach to managing AI policy and anticipating future developments. Considering AI's cross-sectoral impact, limited regulatory capacity, and the absence of a dedicated regulator for emerging technologies, India's AI governance would be strengthened by institutional collaboration that brings together key ministries, sectoral regulators, and standards bodies to jointly design and implement policy frameworks aligned with the objectives of responsible AI governance.
Section 2: The Seven Guiding Principles ("Sutras") of Responsible AI
The India AI Governance Guidelines outline seven guiding principles that articulate India's AI governance philosophy. Adapted from the RBI's August 2025 Framework for Responsible and Ethical Enablement of Artificial Intelligence ("FREE-AI") Committee report, where they guided AI development and risk mitigation in the financial sector, the seven 'sutras' are outlined below:
- Trust is the Foundation: Trust is foundational; without it, AI innovation and adoption cannot advance.
- People First: AI should be anchored in human-centric design, guided by human oversight, and focused on empowering individuals.
- Innovation over Restraint: Provided appropriate safeguards are in place, responsible AI innovation should take precedence over precautionary restraint.
- Fairness & Equity: AI should promote inclusive development and avoid discrimination or bias.
- Accountability: There must be a clear allocation of responsibility and mechanisms to enforce regulations.
- Understandable by Design: AI systems should provide clear explanations and disclosures that can be understood by users and regulators.
- Safety, Resilience & Sustainability: AI systems should ensure safety, security, and resilience against systemic shocks, alongside long-term environmental sustainability.
Section 3: Key Recommendations across Six Governance Pillars
The Committee, guided by the seven sutras, proposes an AI governance approach that drives innovation and progress while mitigating risks. Governance should extend beyond regulation to include education, infrastructure, diplomacy, and institution building across six key pillars outlined below:
- Infrastructure: India's AI governance framework seeks to drive innovation, adoption, and advancement of AI while mitigating societal risks. As of August 31, 2025, over 38,000 GPUs have been made available at subsidised rates, AIKosh hosts 1,500 datasets and 217 AI models, and four startups are developing sovereign foundation models with government support. To ensure inclusive and broad-based adoption, the Committee recommends empowering ministries, regulators, and states to implement enablement initiatives and providing targeted incentives such as tax rebates, AI-linked loans through SIDBI and Mudra, and subsidised GPU access for MSMEs, supported by sector-specific toolkits. It also calls for expanding access to data and compute through market incentives, contributions to open platforms, and strong data governance frameworks to ensure fairness, transparency, and data sovereignty. Leveraging Digital Public Infrastructure ("DPI") can further enable scalable, secure, and affordable AI solutions, embedding privacy and interoperability by design. Finally, the Committee urges the creation of investment schemes across the AI value chain to position India as a global hub for AI innovation and entrepreneurship.
- Capacity Building: India has launched several capacity-building initiatives like IndiaAI FutureSkills and FutureSkills PRIME, supporting over 500 PhD fellows, 8,000 undergraduates, and 5,000 postgraduates. The Committee recommends expanding these efforts through targeted education, skilling, and training initiatives to build trust, empower citizens, and promote inclusive AI adoption aligned with India's governance principles. It further recommends increasing societal trust and awareness of AI through outreach while building capacity among officials, regulators, and law enforcement to ensure responsible use and effective response to AI-related issues. It also calls for expanding capacity-building initiatives to reach tier-2 and tier-3 cities and vocational institutes.
- Policy & Regulation: The overarching goal of India's AI governance framework is to encourage innovation, adoption, and technological progress while ensuring that actors in the AI value chain mitigate risks to individuals and society. Applicability of existing laws: The Committee has reviewed the existing system of laws and regulations in India, including those governing information technology, data protection, intellectual property, competition, media, employment, consumer protection, and criminal law, and finds that many of the risks arising from AI can be addressed through existing frameworks. For example, deepfakes used to impersonate individuals can be regulated under the Information Technology Act, 2000 and the Bharatiya Nyaya Sanhita, while the use of personal data without consent for training AI models is governed by the Digital Personal Data Protection Act, 2023. The Committee also notes the need for a comprehensive review of laws such as the Pre-Conception and Pre-Natal Diagnostic Techniques ("PC-PNDT") Act to address emerging AI applications like radiology analysis, which could be misused for unlawful sex selection. In priority sectors such as finance, where AI adoption is growing rapidly, the Committee recommends that potential regulatory gaps be promptly identified and addressed through targeted legal amendments and sector-specific rules.
There are ongoing deliberations on key areas of AI regulation, including classification, liability, data protection, content authentication, and copyright:
a) Classification and Liability: The Committee highlights that the Information Technology Act, 2000, being over two decades old, requires updates to clarify the classification of AI systems and the roles of different actors in the AI value chain, such as developers, deployers, and users. The current definition of "intermediary" under Section 2(w) may not adequately capture AI systems that generate or modify content, and the Committee notes that liability across the AI value chain needs clarification, since the Section 79 safe-harbour protections for intermediaries may not extend to such systems. It recommends amending the IT Act to clearly define AI classifications, obligations, and liability frameworks for developers and deployers.
b) Data Protection: The Committee highlights emerging questions under the DPDP Act, regarding its impact on AI development. Key issues include the scope of exemptions available for the training of AI models on publicly available personal data, the compatibility of collection and purpose-limitation principles with how AI systems operate, the role of consent managers in AI workflows, and the value of dynamic and contextual notices in a world of multimodal and ambient computing.
c) Content Authentication: Generative AI tools for creating images, videos, and music offer immense opportunities for creativity and innovation but also pose serious risks of misuse, including deepfakes, child sexual abuse material ("CSAM"), and non-consensual content. The Committee recommends balancing these benefits and risks through strong content authentication and provenance mechanisms, such as watermarks and unique identifiers, aligned with global standards like the Coalition for Content Provenance & Authenticity ("C2PA"); a simplified sketch of such a provenance mechanism appears after this list of pillars. It further suggests establishing an expert committee comprising representatives from government, industry, academia, and standard-setting bodies to develop and test global standards for content authenticity. In parallel, the proposed AI Governance Group ("AIGG"), supported by the Technology & Policy Expert Committee ("TPEC"), should review India's regulatory framework and recommend techno-legal measures to combat AI-generated deepfakes effectively.
d) Copyright: Copyright remains a complex issue in AI governance, particularly with respect to generative AI systems. Following public consultations and the draft AI Governance Guidelines released in January 2025, the Department for Promotion of Industry and Internal Trade ("DPIIT") established a committee in April 2025 to examine the legality of using copyrighted works for AI training, the copyrightability of AI-generated outputs, and international best practices. The Committee notes that under Section 52 of the Copyright Act, 1957, the 'fair dealing' exception for research is limited to non-commercial use and may not extend to many forms of AI training. Given ongoing global developments, including the adoption of Text and Data Mining ("TDM") exceptions in jurisdictions like the EU, Japan, Singapore, and the UK, the Committee recommends that the DPIIT committee consider a balanced framework, one that facilitates innovation through TDM while safeguarding the rights of copyright holders.
e) Global diplomacy on AI governance: AI governance is emerging as a key pillar of global diplomacy, essential for protecting national interests and technological sovereignty. India's balanced approach can guide Global South nations and strengthen its role in forums like the G20, UN, and OECD. The Committee urges proactive foresight, flexible frameworks, and continuous policy adaptation to address rapid AI advancements and evolving risks, and emphasises the need for common standards on content authentication, data integrity, and cybersecurity.
- Risk Mitigation: Risk mitigation involves translating policy and regulatory principles into practical safeguards to ensure AI systems are transparent, fair, and accountable. Recognising that AI can create new risks or amplify existing ones, the Committee emphasises developing India-specific risk assessment and mitigation frameworks suited to its social and economic realities. These efforts aim to balance innovation with safety, particularly by addressing harms that affect vulnerable groups such as children and women, and by embedding trust and accountability into India's AI ecosystem. The Guidelines also highlight the Data Empowerment and Protection Architecture ("DEPA") as a model for embedding legal safeguards into AI system design. A proposed "DEPA for AI Training" framework could enable privacy-preserving, consent-based data sharing during AI model development, ensuring transparency, accountability, and compliance by design.
To operationalise this, the Committee recommends a comprehensive approach that includes: (i) developing a risk classification framework for India; (ii) creating a national AI incident database for real-time monitoring and empirical analysis; (iii) promoting voluntary frameworks like industry codes and self-certifications to encourage responsible innovation; (iv) adopting techno-legal approaches that embed compliance within system design, including DEPA for AI training; and (v) ensuring human oversight and automated safeguards to prevent loss of control. Together, these measures create a layered risk mitigation system, combining legal, technical, and ethical tools, to promote safe, trustworthy, and inclusive AI governance in India.
- Accountability: Implement a graded accountability and liability regime across the AI value chain. The Guidelines acknowledge that AI involves many actors, from developers to deployers and end-users, and that accountability should be apportioned based on one's role, the level of AI autonomy, and the risk involved. They propose a "graded liability" system: higher-risk AI activities, or those undertaken without due diligence, would carry greater responsibility for the entity involved. To strengthen accountability beyond legal tools, the Committee recommends complementary mechanisms such as transparency reports (to disclose impact assessments and mitigation steps), committee hearings (for regulatory and parliamentary scrutiny), self-certifications (through auditors or standards bodies), peer monitoring (by competitors and civil society), internal policy commitments, and techno-legal safeguards that embed compliance within system design. Importantly, the Guidelines reiterate that existing laws remain enforceable. They also push for greater transparency about AI systems: companies should publish AI transparency reports and disclose how they operate and manage AI risks, enabling regulators and the public to scrutinise compliance. Additionally, AI service providers must establish accessible, multilingual grievance redressal mechanisms so that individuals can easily report AI-related harms or concerns, receive prompt responses, and have their feedback used to improve AI products; these mechanisms must operate independently of the proposed AI Incidents Database. Finally, the Committee recommends clarifying how developers, deployers, and end-users are governed under existing laws like the IT Act, with obligations proportionate to their role and risk, supported by clear enforcement guidance and voluntary accountability measures such as self-certifications and audits. Greater transparency across the AI value chain is also essential for informed and consistent regulatory oversight.
- Institutions: Create a coordinated institutional framework for AI governance, without concentrating all authority in a single regulator. The Guidelines recommend a "whole-of-government" approach where multiple bodies share responsibility for AI governance. Key institutional proposals include:
a) AI Governance Group: The AIGG, chaired by the Principal Scientific Adviser, will act as a small yet effective decision-making body responsible for policy coordination, accountability, and oversight of AI governance across the public and private sectors. It will review existing mechanisms, address regulatory gaps, and guide legislative or policy reforms where necessary. The group will work closely with sectoral regulators such as the RBI, SEBI, and ICMR, and will be supported by MeitY, which will serve as the nodal ministry. The AIGG will also be assisted by the TPEC, which will provide strategic expertise to support its policy development and implementation functions.
b) Technology & Policy Expert Committee: The TPEC will be established by MeitY to provide specialised technical and policy expertise to the AIGG. It will function as an advisory body, ensuring that India's AI governance decisions are informed by cutting-edge research, innovation insights, and strategic foresight. Comprising a small group of experts from diverse domains such as frontier technology R&D, engineering, data science, law and public policy, public administration, and national security, the TPEC will advise the AIGG on key issues of national importance. Its mandate includes assessing new and emerging AI capabilities, identifying potential risks and regulatory gaps, tracking global developments in AI governance, and supporting India's international diplomatic engagements on AI-related matters.
c) AI Safety Institute: The AISI should serve as the central body guiding the safe and trusted use of AI in India through research, risk assessment, and capacity-building. It should test AI systems, advise policymakers and industry, and continue work under the IndiaAI Mission on bias mitigation, explainable AI, and privacy tools. Operating on a hub-and-spoke model, AISI should coordinate with regulators, identify emerging risks, and develop guidelines, standards, and testing frameworks. It should also foster public-private partnerships, build institutional capacity, and promote AI safety tools. Further, AISI should represent India in global forums and support TPEC and AIGG with risk assessments and policy advice.
d) Sectoral Regulators and Ministries: The existing regulators for various sectors (such as the Reserve Bank of India for fintech, SEBI for securities, TRAI for telecom, etc.) are expected to incorporate the AI governance principles into their domain regulations. Rather than creating a single super-regulator for AI, India's approach is to empower sectoral regulators to handle AI issues in their respective industries, since they possess domain expertise.
e) Standards Bodies: Institutions like the Bureau of Indian Standards ("BIS") and other technical standardization committees (e.g., the Telecom Engineering Centre for telecom standards) are identified as key players in the governance framework. They are responsible for developing AI-related standards, including risk taxonomies and certification norms, engaging with global standard-setting organizations, and standardizing procedures for testing, assessment, evaluation, and validation to ensure consistency, safety, and reliability in AI deployment across sectors.
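As a concrete illustration of the content-provenance idea discussed under Policy & Regulation above, here is a minimal, hypothetical sketch of attaching and verifying a signed provenance manifest for a media file. This is not the C2PA specification or any API referenced in the Guidelines; real content credentials use X.509 certificate chains and embedded manifests, and every name and structure below is an illustrative assumption.

```python
import hashlib
import hmac
import json

# Hypothetical sketch of a provenance manifest, loosely inspired by the idea
# behind C2PA-style content credentials. An HMAC key is used here purely for
# brevity; real systems sign manifests with public-key certificates.

SIGNING_KEY = b"demo-key-do-not-use-in-production"

def create_manifest(media_bytes: bytes, generator: str, ai_generated: bool) -> dict:
    """Bind a content hash and a synthetic-content disclosure into a signed claim."""
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,        # e.g. the tool that produced the media
        "ai_generated": ai_generated,  # disclosure flag for AI-generated content
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature and that the media still matches the signed hash."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claim["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )
```

Any edit to the media invalidates the signed hash, while stripping the manifest leaves the content without credentials, which is why the Guidelines discuss provenance mechanisms alongside complementary measures such as watermarks and unique identifiers.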
Section 4: Phased Implementation: Short, Medium, and Long-Term Roadmap
The Guidelines lay down a concrete Action Plan that provides short-term, medium-term, and long-term steps to operationalize the framework. This phased roadmap is important for businesses and stakeholders to understand the timeline of changes and prepare accordingly. Key action items in each timeframe and the expected outcomes are as follows:
- Short-Term Action Items:
- Establishing the AIGG as a permanent high-level policy-making body, supported by the TPEC.
- Developing India-specific AI risk assessment and classification frameworks with sectoral inputs.
- Conducting regulatory gap analyses and suggesting legal amendments.
- Adopting voluntary frameworks to promote responsible innovation.
- Publishing a master circular with applicable regulations and best practices.
- Preparing an AI incidents database and grievance redressal mechanisms.
- Developing clear liability regimes across the AI value chain.
- Expanding access to foundational infrastructure such as data, compute, and models.
- Conducting public awareness and training programs for citizens and regulators on AI capabilities and risks.
- Operationalizing safe and trusted tools in areas such as bias mitigation, privacy-enhancing tools, and deepfake detection.
Expected Outcomes from Short-Term Action Items:
- Establishment of strong institutions to coordinate AI governance.
- Development of risk classification and mitigation frameworks tailored to the Indian context.
- Promotion of a culture of voluntary industry compliance.
- Enhancement of understanding of regulatory gaps and needs.
- Building robust infrastructure for incident reporting and grievance redressal.
- Improvement of societal trust and literacy regarding AI technologies.
- Medium-Term Action Items:
- Publishing common standards for content authentication, data integrity, fairness, and cybersecurity.
- Operationalising a national AI incidents database with localized reporting and feedback loops.
- Amending laws to address regulatory gaps.
- Piloting regulatory sandboxes in high-risk domains.
- Supporting the integration of DPI with AI through appropriate policy enablers.
Expected Outcomes from Medium-Term Action Items:
- A mature, standardized governance framework.
- A safe experimentation environment for innovation.
- Broader adoption of DPI-enabled AI systems.
- Easier compliance through guidance and updated laws.
- Effective grievance redressal mechanisms for citizens.
- Long-Term Action Items:
- Continuous review and monitoring of the governance framework and activities under the Action Plan.
- Adopting new laws to address emerging risks and capabilities.
- Expanding global diplomatic engagement and contributing to international standards development.
- Conducting horizon-scanning and scenario planning to prepare for future risks and opportunities.
Expected Outcomes from Long-Term Action Items:
- A mature, balanced, and agile legal framework.
- Enhanced international credibility in AI governance leadership.
- An effective accountability system to address AI-related harms.
- A future-ready governance system capable of handling emerging risks.
Section 5: Practical Guidelines for Industry and Regulators
For Industry (AI Developers & Deployers): The Guidelines urge companies working with AI to proactively adopt responsible AI practices now. In particular, industry actors should:
- Comply with all existing Indian laws applicable to their AI operations: This includes laws governing information technology, data protection, copyright, consumer protection, and offences against women, children, and other vulnerable groups, to the extent they apply to AI systems.
- Adopt voluntary AI governance frameworks and ethical codes: Encourage the adoption of voluntary principles, codes of conduct, and technical standards that promote privacy and security, fairness and inclusivity, non-discrimination, transparency, and other essential organisational safeguards for responsible AI development and deployment.
- Publish transparency reports: Publish transparency reports assessing the potential risks and harms of AI to individuals and society in the Indian context, ensuring that any sensitive or proprietary information is shared confidentially with the relevant regulators.
- Provide accessible grievance redressal mechanisms for users impacted by AI: If an AI system causes harm or an error, users should have a clear avenue to raise complaints and seek a resolution within a reasonable timeframe.
- Mitigate risks through techno-legal solutions: Promote the adoption of techno-legal solutions to mitigate AI risks, including privacy-enhancing technologies, machine unlearning, algorithmic auditing, and automated bias detection mechanisms; a simplified sketch of one such bias check follows this list.
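As one hedged illustration of the "automated bias detection" idea above, the sketch below computes a demographic parity gap, one common fairness metric, over a model's decisions. The metric choice, the 0.1 review threshold, and the record layout are assumptions made for illustration; the Guidelines do not prescribe any particular metric or tooling.

```python
from collections import defaultdict

# Hypothetical sketch of an automated bias check using demographic parity.
# The threshold and data layout are illustrative assumptions only.

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs, approved being True/False.

    Returns the gap between the highest and lowest approval rates across groups;
    a gap of 0.0 means all groups are approved at identical rates.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = [approvals[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    gap = demographic_parity_gap(sample)
    # Flag the system for human review if approval rates diverge too far.
    print(f"parity gap = {gap:.2f}", "-> review needed" if gap > 0.1 else "-> ok")
```

Run periodically over production decisions, a check like this could feed the transparency reports and algorithmic audits the Guidelines encourage, turning a voluntary commitment into a repeatable, evidenced process.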
Section 6: For Regulators and Government Agencies
The Committee recommends the following guiding principles for policy formulation and implementation by agencies and sectoral regulators within their respective domains:
- Support innovation while addressing real harms: The twin goals of any proposed AI governance framework are to promote innovation, adoption, and equitable distribution of AI's benefits to society, while simultaneously addressing potential risks through effective policy measures.
- Avoid overly compliance-heavy regimes: Proposed AI governance frameworks should avoid imposing compliance-heavy requirements, such as mandatory approvals or licensing conditions, except where they are deemed strictly necessary.
- Promote techno-legal approaches: Regulators should promote the adoption of techno-legal approaches to achieve policy objectives such as privacy, cybersecurity, fairness, and transparency, especially in areas where relevant policy measures are already established.
- Flexible and Agile Governance Frameworks: Governance frameworks should be designed to be flexible and agile, allowing for periodic reviews, continuous monitoring, and recalibration based on stakeholder feedback.
- Risk-Based Prioritization in Policy Implementation: Regulators should prioritize policy interventions that address real and immediate risks, particularly those posing threats to life, livelihood, or overall well-being.
- Choice of Policy Instruments: The relevant regulator or agency should select the most suitable and least burdensome policy instrument, such as industry codes, technical standards, advisories, or binding rules to effectively achieve the intended objective.
Section 7: Implications for Businesses and Next Steps
While the Guidelines themselves are non-binding, they are highly indicative of the government's expectations and future regulatory plans. Businesses should take note of several implications:
- No New Law, but Vigilance Is Required: The absence of a new AI Act is not a free pass to ignore governance. Regulators will be looking closely at how industries implement these voluntary guidelines. Firms that proactively align with the principles (e.g., conducting algorithmic audits, documenting AI decision processes, training staff on AI ethics) will be better positioned, both in compliance terms and reputationally, as trustworthy AI providers.
- Existing Laws Apply to AI Uses: Businesses must review how current laws like the IT Act, DPDP Act, Consumer Protection Act, etc., apply to their AI operations. For instance, using personal data to train AI without consent could lead to DPDP Act penalties; an AI-powered content platform might suddenly find itself liable for user-generated misinformation if not protected by safe harbour. The Guidelines highlight such intersections so companies can address them in advance (e.g., obtaining proper consent for training data, or implementing content moderation on generative AI outputs to avoid illegal content).
- Prepare for Sectoral Guidelines: If you operate in a regulated sector (finance, healthcare, telecom, etc.), anticipate that your regulator will likely issue AI-specific guidance or requirements in line with these national principles. Early engagement with regulators, perhaps via industry bodies, could help shape workable rules.
- Implement Governance Structures Internally: Consider setting up an AI governance committee or a responsible AI officer within your organization. Such internal governance can oversee adherence to ethical principles, handle risk assessments, and interface with external auditors or regulators. Some large companies globally have instituted AI ethics boards; Indian companies may want to follow suit to signal commitment to the "Trust" and "Accountability" sutras.
- Leverage the Ecosystem: The government is establishing infrastructure like the AI Safety Institute and possibly open data platforms. Companies should utilize these, e.g., engage with AISI on best practices or participate in regulatory sandbox programs when available. They can also follow standards coming from BIS or AISI in their product development to stay ahead of compliance curves.
- Monitor the Roadmap: Keep an eye on the short-term milestones. For instance, if a master circular of applicable AI regulations and best practices is published (as hinted in the action plan), be sure to obtain and implement its guidance. Similarly, once the AI incident database is up, consider how you will report any incidents and what internal processes you need for that. Staying informed about each step of the government's implementation (via PIB releases or industry seminars) will help avoid surprises.
Key Considerations
- The Innovation-Safety Trade-off: The Guidelines emphasize 'innovation over restraint,' advocating a light-touch regulatory approach to enable rapid AI deployment. However, high-risk applications in areas such as criminal justice, lending, healthcare, or employment could result in significant harm if adequate safeguards are not in place. Documented risks, including deepfakes, algorithmic bias, and potentially fatal autonomous accidents, may not be fully mitigated through voluntary compliance alone. The effectiveness of India's approach will ultimately need to be evaluated empirically.
- Voluntary Compliance and Regulatory Enforcement: The Guidelines rely heavily on voluntary commitments and industry self-regulation rather than on mandatory, penalty-backed requirements. While the Guidelines envisage a gradual transition from voluntary to mandatory measures, the absence of defined timelines creates potential risks if harms accumulate during this period and corrective actions are retrospective rather than preventive.
Conclusion
The release of the India AI Governance Guidelines is a significant milestone in the country's journey to harness artificial intelligence for inclusive growth while ensuring it remains safe and trustworthy. By articulating clear principles and a structured action plan, the government has given stakeholders a roadmap of what to expect in AI policy over the coming years. In the coming months, we can expect further clarity as the recommended bodies (AIGG, AISI, etc.) are set up, regulatory gap analyses are conducted with corresponding amendments to laws, and a master circular is published with applicable regulations and best practices to support compliance.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.