Artificial Intelligence (AI) is now deeply embedded in India's digital economy, from banking, logistics, healthcare, education, retail, recruitment, and consumer services to enterprise software, cybersecurity, and public administration. In response to the rapid adoption of AI, the Ministry of Electronics and Information Technology (MeitY) released the India AI Governance Guidelines under the IndiaAI Mission in November 2025. These Guidelines, though non-binding, are set to reshape the compliance ecosystem for both entities that build AI and those that use it.
A striking feature of this framework is that it does not merely regulate AI developers. By explicitly encompassing AI Deployers, MeitY has brought almost every business using AI tools, directly or indirectly, under the umbrella of AI governance. This has broadened the scope dramatically, creating layered compliance responsibilities across industries.
This article examines who qualifies as AI Companies and AI Deployers, assesses the impact of MeitY's Guidelines, evaluates the resulting compliance burden, and outlines the broad array of laws that AI entities must navigate in India's fragmented regulatory landscape.
- Defining AI Companies and AI Deployers
- AI Companies (Developers and Model Creators)
An AI Company, whether a technology enterprise, research institution, or software provider, is any organisation that builds, trains, tests, or supplies artificial intelligence models, algorithms, datasets, or AI-powered tools. This category covers a wide range of entities, including generative AI companies working with text, images, audio, or video; machine-learning platform developers; model-as-a-service providers offering APIs or cloud-hosted models; dedicated research labs engaged in foundational or applied AI research; companies developing proprietary models for internal optimisation or external commercial deployment; and software firms embedding AI capabilities into mainstream products. Regardless of size or sector, these companies bear responsibility for addressing model-level risks, including ensuring the legality of training data, preventing algorithmic bias, maintaining technical robustness and safety, implementing strong security controls, ensuring transparency and explainability, and putting in place safeguards to prevent misuse of their AI systems.
- AI Deployers (Entities Using or Implementing AI)
An AI Deployer is any natural or legal person, whether in the public or private sector, that implements, integrates, uses, or makes an AI system available in a real-world operational environment. Importantly, an organisation does not need to develop the underlying AI model to qualify as a deployer; the mere act of using AI within its processes, services, or decision-making functions is sufficient. This category includes a wide range of entities such as banks using AI for credit scoring, e-commerce platforms deploying recommendation engines, HR technology platforms utilising AI-driven hiring tools, hospitals relying on AI for diagnostic support, enterprises integrating AI APIs for vision, speech, or generative capabilities, and government departments employing AI for policing, fraud detection, or welfare administration. AI Deployers are accountable for the real-world impact of these systems on individuals, which includes ensuring fairness in outcomes, maintaining transparency in AI-assisted decisions, establishing accessible grievance redressal mechanisms, and implementing safeguards for safe and responsible use.
- Broad Implication: Almost Every Company Now Qualifies as an AI Deployer
Given the widespread adoption of AI-powered tools, ranging from chatbots and analytics engines to fraud detection systems, recommendation algorithms, and automation software, most businesses in India today fall within the definition of an AI Deployer. This broad applicability brings an extensive range of industries under the governance framework, including logistics, manufacturing, fintech, retail, telecom, education, consulting, healthcare, SaaS providers, IT/ITeS companies, digital marketing firms, and human resources or staffing enterprises. As a result, the inclusion of deployers has significantly expanded the scope of AI governance, extending it far beyond the traditional technology sector and effectively encompassing the majority of modern businesses that rely on AI in any form.
- Impact of MeitY's India AI Governance Guidelines (2025)
The Guidelines aim to create a safe, trustworthy AI ecosystem while nurturing innovation. Although voluntary, they set expectations that regulators, courts, and industry bodies are likely to treat as de facto standards.
- A Shift to Principle-Based, Multi-Layered Governance
Instead of enacting a single, comprehensive AI-specific law, the Government has opted for a lightweight, adaptive governance model that encourages businesses to rely on existing statutory frameworks such as the Information Technology Act, the Digital Personal Data Protection Act, the Copyright Act, the Consumer Protection Act, and various sector-specific regulations. As a result, AI companies are now expected to navigate and comply with this multi-layered legal environment by conducting impact assessments, maintaining detailed documentation of their AI systems, implementing transparency measures for users, establishing internal governance structures, managing risks related to bias and fairness, and providing accessible grievance redressal mechanisms for individuals affected by AI-related harms. This approach places responsibility on companies to operationalise responsible AI practices within the contours of India's existing legal landscape.
- Bringing AI Deployers Under the Governance Net
The most significant structural shift introduced by the Guidelines is the explicit inclusion of AI Deployers within the governance framework. MeitY now expects deployers not only to use AI responsibly but also to assume active accountability for how AI systems affect individuals and society. This includes obligations to publish transparency reports, notify users whenever AI influences or determines a decision, maintain human oversight for consequential or high-stakes outcomes, implement accessible grievance redressal mechanisms, and ensure fairness, non-discrimination, and overall safety in AI-assisted processes. Deployers must also continuously monitor AI outputs to prevent harmful or unintended results and adopt sector-specific safeguards applicable to regulated industries such as finance, healthcare, and telecom. By placing these duties on deployers, the Guidelines effectively turn them into "last-mile gatekeepers," significantly broadening the scope of compliance and extending AI governance obligations far beyond traditional technology providers.
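For illustration only, the sketch below shows one way a deployer might operationalise two of these duties, user notification and human oversight of high-stakes outcomes, as a simple audit record. The schema, field names, and contact point are assumptions for the example, not anything prescribed by the Guidelines.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionRecord:
    """One audit entry for an AI-assisted decision (illustrative schema)."""
    decision_id: str
    model_version: str
    subject_notified: bool            # was the user told AI influenced the outcome?
    high_stakes: bool                 # e.g. a credit denial or hiring rejection
    human_reviewer: Optional[str] = None
    grievance_channel: str = "grievance@example.com"  # hypothetical contact point
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def finalise_decision(record: AIDecisionRecord) -> AIDecisionRecord:
    """Refuse to release a decision that breaches either duty."""
    if record.high_stakes and record.human_reviewer is None:
        raise ValueError("high-stakes decision requires a named human reviewer")
    if not record.subject_notified:
        raise ValueError("user must be notified that AI influenced this decision")
    return record
```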
- Additional Compliance Burden on Industry
For many businesses, the use of AI was previously viewed as a purely technological matter, focused on operational efficiency rather than regulatory exposure. Under the new Guidelines, however, AI deployment has become a compliance, governance, and accountability obligation that demands far more structured oversight. Companies are now expected to establish AI risk committees, adhere to rigorous documentation standards, implement robust security controls, conduct periodic monitoring of AI systems, maintain clear escalation paths for addressing risks or failures, and undertake regular internal audits. Importantly, these requirements are not limited to large corporations; even small and mid-size enterprises must now justify their use of AI, ensure fairness in algorithmic outcomes, and maintain comprehensive records demonstrating responsible model behaviour.
- Introduction of Techno-Legal Obligations
The Guidelines also usher in a new category of techno-legal obligations, requiring organisations to integrate advanced technical safeguards into their AI systems to ensure compliance, accountability, and safety. Companies are now expected to adopt measures such as privacy-enhancing technologies (PETs), conduct regular algorithmic audits, implement robust bias-testing frameworks, and deploy watermarking and traceability features to track AI-generated content. Additionally, they must maintain the ability to execute model unlearning mechanisms when required and preserve detailed dataset provenance logs to demonstrate the legality and integrity of training data. Collectively, these measures significantly elevate both the cost and complexity of AI deployment, compelling businesses to invest in specialised tools, skilled personnel, and ongoing governance processes.
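As a rough sketch, and assuming nothing about what MeitY will ultimately prescribe, a dataset provenance log could be as simple as an append-only record of each source's content hash, licence, and consent basis. Every path and identifier below is hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_entry(dataset_path: str, source: str, licence: str,
                     consent_basis: str) -> dict:
    """Record where a training dataset came from and the legal basis for using it."""
    with open(dataset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "sha256": digest,                # ties the entry to the exact bytes used
        "source": source,                # e.g. a vendor name or URL
        "licence": licence,              # e.g. "CC-BY-4.0" or "commercial licence"
        "consent_basis": consent_basis,  # e.g. a DPDP consent notice reference
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

# Append-only JSON-lines log: the record of what went into training survives
# even after the dataset itself is modified or deleted.
with open("dataset_provenance.jsonl", "a") as log:
    entry = provenance_entry("corpus/support_tickets.txt",  # hypothetical path
                             "internal CRM export", "internal data",
                             "DPDP consent notice v2")
    log.write(json.dumps(entry) + "\n")
```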
- How AI Governance Has Become More Complex in India
AI governance in India has become increasingly complex due to the absence of a unified, dedicated AI statute. Instead of operating under a single regulatory framework, AI companies and deployers must navigate a wide and fragmented array of existing legal regimes, each governing different aspects of AI development and deployment. These include criminal law provisions that address offences such as impersonation and cyber fraud, consumer protection laws targeting misleading claims or unfair trade practices, privacy laws governing personal data use and consent, and intellectual property laws that regulate the use of copyrighted training material. Additionally, employment and anti-discrimination laws apply to AI systems used in hiring or workforce management, while financial regulatory laws cover AI tools used in banking, securities, and digital lending. Child protection statutes govern harmful or explicit AI-generated content involving minors, and a variety of sector-specific regulations, from telecom to healthcare, impose further requirements based on industry context. Digital content rules also come into play when AI systems generate, curate, or moderate online material. Together, this mosaic of overlapping obligations creates a multi-layered compliance landscape that AI entities must carefully navigate.
- Fragmented, Multi-Statute Compliance Environment
AI companies must map obligations across numerous statutes, each carrying different compliance burdens. The absence of a single AI statute means entities must continuously interpret how outdated or broad laws apply to emerging AI risks.
- Broader Scope = Broader Liability
By expanding the definition of regulated entities to explicitly include AI Deployers, the Guidelines significantly broaden the scope of accountability within the AI ecosystem. This expansion determines not only who is responsible for AI-driven decisions but also who must prepare and publish transparency reports, maintain detailed AI impact documentation, and comply with DPDP Act standards, even where the company does not directly handle personal data but relies on AI systems that may process or infer such information. As a result, a far wider range of businesses now faces heightened liability exposure, with obligations that extend well beyond the confines of traditional technology providers and into industries that merely integrate or rely on AI tools in their daily operations.
- Dispersed Enforcement Authority
AI governance in India is further complicated by the dispersed nature of enforcement authority, with oversight spread across multiple regulatory bodies depending on the type of harm or violation involved. Enforcement may originate from MeitY for digital governance matters, CERT-In for cybersecurity incidents, or the Data Protection Board of India (DPBI) for breaches under the DPDP Act. Financial sector use cases may attract scrutiny from SEBI, RBI, or IRDAI, while telecom-related applications fall under the jurisdiction of TRAI. Consumer-facing harms may lead to action by the Central Consumer Protection Authority, while criminal offences such as impersonation, fraud, or explicit content generation can trigger investigations by the police under the Bharatiya Nyaya Sanhita. Ultimately, disputes may escalate to the courts for judicial resolution. This fragmented enforcement landscape requires AI entities to maintain sector-specific compliance vigilance and prepare for oversight from multiple regulators simultaneously.
- Conflicting Requirements Across Sectors
AI governance in India is additionally complicated by the conflicting and overlapping requirements that arise across different regulated sectors, each of which imposes its own set of obligations that often go well beyond the general expectations outlined in MeitY's AI Guidelines. When AI is deployed in highly sensitive domains such as finance, healthcare, education, or telecom, companies must comply not only with broad principles of fairness, transparency, and safety but also with stringent sector-specific rules that impose heightened duties of care. For instance, financial institutions using AI for credit scoring, underwriting, or algorithmic trading must adhere to detailed supervisory frameworks issued by SEBI and RBI, including model validation, audit trails, and algorithmic reporting. In healthcare, AI-driven diagnostic tools may be classified as medical devices, requiring clinical validation, safety testing, and possibly licensing under the Medical Devices Rules. In the education sector, AI systems interacting with minors must align with child protection standards, consent requirements, and age-appropriate design norms. Similarly, telecom operators using AI for spam detection, caller identification, or network management must implement safeguards mandated by TRAI, which may conflict with or exceed the general AI governance guidelines. These sector-based rules often impose more stringent operational, documentation, and oversight requirements than the general AI framework, creating a layered and sometimes contradictory compliance environment that companies must navigate carefully.
- Statutes and Laws AI Companies and Deployers Must Comply With
Indian AI governance relies on existing legal frameworks. The following statutes regulate various aspects of AI operation:
- Information Technology Act, 2000
The Information Technology Act, 2000 serves as one of the primary statutes governing AI-related activities in India, as it regulates a wide range of digital and cyber operations that intersect directly with the functioning of AI systems. The Act covers cyber offences arising from misuse of AI tools, including hacking, unauthorised access, and digital fraud. It also prohibits the creation, transmission, or publication of obscene or harmful content, an issue that has become increasingly significant with the rise of AI-generated imagery and videos. The Act addresses impersonation offences, including the use of deepfakes and AI-generated synthetic identities to deceive or harm individuals. Additionally, it imposes obligations relating to the protection of personal information, thereby governing privacy violations that may arise when AI systems process or infer sensitive personal data without lawful authorisation. Further, it establishes liability for negligence in maintaining data security, making organisations accountable if an AI system's vulnerabilities result in data breaches or exposure of confidential information. Notably, offences under the IT Act must be investigated by a police officer of the rank of Inspector or above, and several offences under the Act are non-bailable.
- Bharatiya Nyaya Sanhita (BNS), 2023
The Bharatiya Nyaya Sanhita (BNS), 2023 plays a critical role in regulating AI-related misconduct by addressing a wide spectrum of criminal offences that can arise from the misuse of artificial intelligence tools. It covers identity misuse, which includes the creation or manipulation of digital identities through deepfakes, synthetic media, or AI-generated impersonation intended to deceive or harm individuals. The statute also extends to cheating and fraud facilitated through AI systems, such as automated scams, manipulated communications, or algorithm-driven deception. Additionally, the BNS governs reputational harm and defamation, recognising that AI-generated content, whether text, audio, or video, can be used to malign, distort, or damage a person's reputation. Importantly, it criminalises the creation, distribution, or possession of obscene or synthetic imagery involving minors, ensuring that AI-generated child sexual abuse material is treated with the same severity as real-world exploitation.
- POCSO Act, 2012
The POCSO Act, 2012 establishes stringent protections for minors against all forms of sexual exploitation, and its scope extends fully to offences facilitated through artificial intelligence. Even when sexually explicit images or videos involving minors are generated synthetically by AI, without the involvement of a real child, the Act treats such material as child sexual abuse material (CSAM) and criminalises its creation, possession, transmission, or distribution. This ensures that AI-generated CSAM is prosecuted with the same severity as real-world offences, recognising that synthetic content can perpetuate the sexualisation of children, fuel harmful behaviour, and contribute to broader exploitation. As AI tools become increasingly capable of creating realistic deepfakes and synthetic media, the POCSO Act serves as a crucial safeguard to prevent misuse and to hold individuals and platforms accountable for enabling or hosting such content.
- DPDP Act, 2023
The Digital Personal Data Protection (DPDP) Act, 2023 forms the cornerstone of India's modern data protection regime and has significant implications for both AI developers and AI deployers. The Act regulates the processing of personal data used in training AI models by mandating clear and valid consent from individuals whose data is utilised, while also requiring organisations to respect user rights such as correction and erasure. It imposes strict obligations to report personal-data breaches within prescribed timelines and reinforces the principle of data minimisation, ensuring that AI systems are trained or operated only on data strictly necessary for their intended purpose. Further, the Act mandates transparent notices that inform individuals about when and how their personal data is being processed, whether for model training or AI-driven decision-making. Importantly, the DPDP Act applies not only to developers who handle large datasets for training purposes but also to deployers whose AI systems make decisions or generate outputs based on personal data, placing a dual compliance burden on the entire AI value chain.
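To make the consent and erasure duties concrete, here is a minimal, purely illustrative consent ledger. The DPDP Act does not prescribe any particular data structure; every class and field name below is an assumption made for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, Optional, Tuple

@dataclass
class ConsentRecord:
    principal_id: str      # the individual whose personal data is processed
    purpose: str           # the purpose stated in the notice, e.g. "model training"
    granted_at: str
    withdrawn_at: Optional[str] = None

class ConsentLedger:
    """Tracks consent per (person, purpose) pair; withdrawal stops processing."""
    def __init__(self) -> None:
        self._records: Dict[Tuple[str, str], ConsentRecord] = {}

    def grant(self, principal_id: str, purpose: str) -> None:
        self._records[(principal_id, purpose)] = ConsentRecord(
            principal_id, purpose, datetime.now(timezone.utc).isoformat())

    def withdraw(self, principal_id: str, purpose: str) -> None:
        rec = self._records.get((principal_id, purpose))
        if rec is not None:
            rec.withdrawn_at = datetime.now(timezone.utc).isoformat()

    def may_process(self, principal_id: str, purpose: str) -> bool:
        rec = self._records.get((principal_id, purpose))
        return rec is not None and rec.withdrawn_at is None
```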
- Copyright Act, 1957
The Copyright Act, 1957 plays a central role in regulating how AI systems interact with protected creative works, imposing important obligations on both developers and deployers. The Act applies directly to the use of copyrighted material within training datasets, meaning that AI companies must ensure they have obtained appropriate licences or permissions before using books, images, music, videos, software code or other protected content to train their models. It also governs the outputs of AI systems, prohibiting the unauthorised reproduction or substantial imitation of copyrighted works in generated content. This means that if an AI model produces text, images or audio that closely resembles or replicates a copyrighted work without permission, both the developer and the deployer may face claims of infringement. As generative AI becomes increasingly capable of producing sophisticated and realistic content, compliance with the Copyright Act has become an essential aspect of responsible AI governance in India.
- Consumer Protection Act, 2019
The Consumer Protection Act, 2019 introduces important safeguards for individuals interacting with AI-driven products and services, imposing clear obligations on businesses to ensure fairness and transparency in their commercial practices. The Act prohibits companies from making misleading or exaggerated claims about the accuracy, reliability, or performance of their AI systems, an issue that has become increasingly relevant as businesses market AI tools with promises of near-perfect precision or automation. It also bars unfair trade practices, including the use of AI to manipulate consumer choices, conceal material information, or create deceptive user experiences. Furthermore, the Act requires companies to disclose when AI is involved in making or influencing decisions that affect consumers; hidden or undisclosed AI-driven decision-making is prohibited, as it deprives users of informed consent and undermines transparency. Together, these provisions ensure that AI-driven consumer interactions remain honest, fair, and accountable, placing responsibility on businesses to clearly communicate how AI is used and to avoid practices that could mislead or disadvantage consumers.
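A disclosure duty of this kind can be inexpensive to honour in practice. The snippet below, with entirely made-up wording, simply prefixes every automated reply so the consumer knows no human wrote it.

```python
def with_ai_disclosure(reply: str) -> str:
    """Prefix an automated reply with a plain-language AI disclosure."""
    notice = ("You are chatting with an automated AI assistant. "
              "You can ask for a human agent at any time.")
    return f"{notice}\n\n{reply}"

print(with_ai_disclosure("Your order #1042 shipped yesterday."))
```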
- Anti-Discrimination & Labour Laws
India's anti-discrimination and labour laws impose critical obligations on organisations that use AI in employment-related functions, ensuring that automated systems do not perpetuate bias or produce discriminatory outcomes. Key statutes in this framework include the Rights of Persons with Disabilities Act, 2016, which mandates equal opportunity and prohibits discriminatory exclusion of persons with disabilities; the Transgender Persons (Protection of Rights) Act, 2019, which prevents unfair treatment or denial of employment on the basis of gender identity; and the Scheduled Castes and the Scheduled Tribes (Prevention of Atrocities) Act, 1989, which criminalises discriminatory practices or acts that adversely affect members of SC/ST communities. Additionally, the Code on Wages, 2019 requires fairness and transparency in wage-setting, which applies equally when AI-driven tools are used to determine compensation or salary bands. Collectively, these laws apply to all AI-based hiring, promotion assessments, performance evaluations, wage decisions, and workforce automation systems. Organisations deploying AI in their HR processes must therefore ensure that such systems are free from discriminatory bias, are explainable, and are subject to human oversight to avoid unlawful or prejudicial outcomes.
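One common way to screen hiring tools for the kind of bias these statutes target is the adverse-impact ("four-fifths") ratio borrowed from employment-testing practice. It is not mandated by any of the Indian laws above; the sketch below, using hypothetical numbers, simply illustrates what an algorithmic fairness check can look like.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the AI tool advanced."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratio(rates: dict) -> float:
    """Lowest group selection rate divided by the highest; a value
    below 0.8 is a common red flag warranting human review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes from an AI resume filter
rates = {
    "group_a": selection_rate(selected=40, applicants=100),  # 0.40
    "group_b": selection_rate(selected=22, applicants=100),  # 0.22
}
print(f"adverse impact ratio: {adverse_impact_ratio(rates):.2f}")  # 0.55 -> flag
```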
- SEBI Act, Banking Regulation Act, and RBI Guidelines
The SEBI Act, the Banking Regulation Act, and various RBI guidelines collectively form the regulatory backbone for the use of AI within India's financial services sector, imposing stringent controls on how algorithms and automated systems may be deployed. These laws govern AI applications across several critical functions: algorithmic trading, where firms must ensure transparency, auditability, and safeguards against market manipulation; credit scoring, where AI-driven assessments must be fair, explainable, and free from discriminatory bias; and underwriting, which requires accurate, accountable, and compliant risk evaluations. The regulatory framework also extends to fraud detection systems, mandating accuracy, security, and oversight to prevent wrongful flagging or operational errors. Similarly, AI-driven risk management tools must align with supervisory expectations around governance, reliability, and model validation. Together, these statutes and guidelines ensure that the adoption of AI in financial services maintains market integrity, consumer protection, and systemic stability.
- Sector-Specific Laws
Across various industries, a range of sector-specific laws further regulates the use of AI, imposing obligations tailored to the risks and sensitivities of each domain. In the healthcare sector, AI tools that perform diagnostic or clinical functions may be classified as Software as a Medical Device (SaMD) under the Medical Devices Rules, requiring regulatory approval, safety validation, and quality assurance before deployment. In the telecom sector, the use of AI for spam detection, caller identification, network management, or content filtering must align with TRAI guidelines, which emphasise transparency, accuracy, and network integrity. For insurance companies, IRDAI regulations apply to AI-driven claim processing, underwriting automation, and risk assessment tools, ensuring fairness, accountability, and compliance with policyholder protection norms. In the education sector, AI-powered learning platforms and tutoring systems must comply with child safety and age-appropriate design standards to safeguard minors.
- The Expanded Compliance Universe After Including AI Deployers
The inclusion of AI Deployers within the regulatory framework has dramatically expanded the scope of India's AI governance landscape, extending compliance obligations far beyond traditional technology companies. Under this broadened definition, even entities that merely use AI tools, rather than develop them, are now covered. This includes companies that rely on simple AI APIs for automation or workflows, enterprises using AI-driven analytics to inform business decisions, and consumer-facing businesses deploying AI chatbots for customer service. It also encompasses HR departments using automated screening or assessment tools, SaaS platforms integrating AI features into their products, and government departments employing AI for public-facing services such as fraud detection, welfare distribution, or administrative functions. As a result, AI governance in India has effectively transitioned from a niche regulatory concern applicable only to specialised AI developers into a whole-economy compliance regime, where virtually any organisation leveraging AI in any form falls within the ambit of the Guidelines and associated legal obligations.
- Conclusion
MeitY's 2025 AI Governance Guidelines mark a foundational shift in India's regulatory architecture. By placing governance responsibilities on both AI Companies and AI Deployers, the Government has created a broad, principle-based compliance ecosystem. While the Guidelines promote responsible innovation, they also impose a significant compliance burden on industry. AI entities must now navigate multiple overlapping laws, sectoral regulations, and techno-legal obligations, often without a single codified AI statute for clarity.
In effect, India now operates a multi-layered AI governance model, where responsibility is distributed across developers, deployers, and regulators alike. As AI continues to scale across sectors, companies will need robust internal governance systems, legal compliance frameworks, and technical safeguards to remain compliant in a rapidly evolving landscape.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.