ARTICLE
25 September 2024

Information Technology

JSA

Contributor

JSA is a leading national law firm in India with over 600 professionals operating out of 7 offices located in: Ahmedabad, Bengaluru, Chennai, Gurugram, Hyderabad, Mumbai and New Delhi. Our practice is organised along service lines and sector specialisation that provides legal services to top Indian corporates, Fortune 500 companies, multinational banks and financial institutions, governmental and statutory authorities and multilateral and bilateral institutions.

Ministry of Electronics and Information Technology's advisory on deployment of AI models

On March 1, 2024, the Ministry of Electronics and Information Technology ("MeitY") issued an advisory ("Advisory") directing all intermediaries and platforms to label any under-trial/unreliable artificial intelligence ("AI") models, and to secure explicit prior approval from the government before deploying such models in India. The Advisory reflects MeitY's strong response to the Google Gemini row and builds on an earlier advisory dated December 23, 2023 ("December Advisory"), which specifically targeted the growing concerns around deepfakes propagated by AI and mandated the communication of prohibited content to users.

Provisions of the Advisory

The Advisory states that all intermediaries and platforms are to ensure that their AI models, large language models ("LLMs"), generative AI, software, algorithms or other computer resources do not permit any discrimination or threaten the integrity of the electoral process, and that they are to prohibit their users from contravening the provisions of the Information Technology Act, 2000 ("IT Act") and the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ("Rules"), which do not permit hosting, displaying, modifying, publishing, transmitting, storing, updating or sharing unlawful content. Further, the Advisory states that the use of under-testing/unreliable AI models on the Indian internet is subject to prior explicit approval from the government, and that their deployment is conditional on due diligence regarding the possible and inherent unreliability of the generated output, which must be communicated to users by way of a consent mechanism setting out the risks and consequences of dealing with such unlawful information. The Advisory also requests intermediaries and platforms whose software or other computer resources could be used to generate information that may be misused or considered a deepfake to label such information with unique metadata or an identifier, so that the label, metadata or identifier can be used to establish that the information was generated by the intermediary's AI system, and to identify the intermediary and the creator or first originator of the misinformation or deepfake. Moreover, very similar to the December Advisory, the Advisory highlights the possibility of severe penal consequences for intermediaries, platforms and users in the event of non-compliance with the IT Act, the Rules and criminal laws.

Clarifications on the Advisory

Further, the Minister for Electronics and Information Technology, Mr. Ashwini Vaishnaw, and the Minister of State, Mr. Rajeev Chandrasekhar, have confirmed that the Advisory is not binding and only encourages voluntary compliance to prevent legal action by consumers. Mr. Chandrasekhar has also clarified that the Advisory is intended for significant/large platforms, and that AI start-ups are not required to seek prior approval from the government.

Conclusion

Although Rule 13 of the Rules permits MeitY to issue appropriate guidance and advisories to publishers, it is unclear whether issuing advisories specific to AI governance falls within the scope of the Rules, which calls the Advisory's validity into question. In any event, an advisory is, by its very nature, not binding, as held in a plethora of judgments of Indian courts. The threshold for determining "significant/large platforms" and "start-ups" remains unclear, and the parameters for evaluating "under-tested" and "unreliable" AI are not defined, thereby making voluntary compliance difficult.

Revised MeitY advisory on deployment of AI models

In light of the ambiguities arising from the Advisory, on March 15, 2024, MeitY issued a revised advisory on the deployment of AI models ("Revised Advisory"), which effectively replaces the Advisory without modifying the December Advisory. The Revised Advisory does away with the mandatory prior government approval and the submission of an action-taken-cum-status report, extends the scope of due diligence to all intermediaries and platforms, and retains certain requirements from the Advisory.

Provisions of the Revised Advisory

The Revised Advisory reinforces some requirements from the Advisory, namely: (a) users need to be explicitly informed about the unreliability of the generated output by way of a "consent pop-up" mechanism or any other equivalent mechanism; (b) all intermediaries and platforms are required to inform users about the ramifications of dealing with unlawful content; and (c) all intermediaries and platforms are required to use labels, metadata or unique identifiers to identify content or information that is AI-generated, modified, or created using synthetic information. Like the Advisory, the Revised Advisory also reiterates the importance of compliance with the IT Act and the Rules.
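Neither the Advisory nor the Revised Advisory prescribes a particular technical format for such labels, metadata or identifiers. Purely by way of illustration, the minimal Python sketch below shows one way a platform could attach a machine-readable provenance label to AI-generated output; the label_output helper, the field names and the values are hypothetical and are not drawn from the advisories.

    import json
    import uuid
    from datetime import datetime, timezone

    def label_output(text: str, model_name: str, intermediary: str) -> dict:
        """Attach a hypothetical machine-readable provenance label to AI-generated text."""
        return {
            "content": text,
            "provenance": {
                "label": "AI-generated",          # human-readable marker
                "identifier": str(uuid.uuid4()),  # unique identifier for traceability
                "generated_by": model_name,       # AI system that produced the output
                "intermediary": intermediary,     # platform responsible for the output
                "generated_at": datetime.now(timezone.utc).isoformat(),
            },
        }

    # Example: serialise the labelled output as metadata accompanying the content
    labelled = label_output("...model output...", "example-llm-v1", "example-platform")
    print(json.dumps(labelled, indent=2))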

The Revised Advisory has introduced some changes, namely:

  1. the requirement to seek explicit prior permission from the government for the deployment of any unreliable or under-tested AI model has been done away with; instead, such models are to be made available to users only after notifying them of the unreliability of the generated output;
  2. the Revised Advisory has eased the requirement to submit an action-taken-cum-status report;
  3. the due diligence requirements, including the compliance requirements related to the use and deployment of AI tools, extend to all intermediaries and platforms, as opposed to only the "significant/large" platforms referred to in the Advisory and the clarification issued thereafter;
  4. the scope of "unlawful content" that all intermediaries and platforms must ensure is not published/hosted/displayed/transmitted/stored/updated or shared extends beyond the Rules and the IT Act, and also encompasses content that is deemed unlawful under any other law in force;
  5. the Revised Advisory serves as a reminder that intermediaries, platforms and their users may face penal consequences under criminal laws for non-compliance with the IT Act and the rules made thereunder; and
  6. the labelling requirements in the Advisory have been extended to cover identification not only of the creator or first originator of misinformation or a deepfake, but also of the user or computer resource that has caused any change or modification to such information (see the illustrative sketch after this list).
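As above, no technical mechanism is prescribed for tracing modifications to AI-generated content. Purely as an illustration (all class, field and actor names below are hypothetical), a platform could maintain a simple provenance chain so that both the first originator and any subsequent modifier of a piece of content remain identifiable:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    @dataclass
    class ProvenanceEvent:
        """One hypothetical event in the life of a piece of AI-generated content."""
        actor: str   # user or computer resource responsible for the event
        action: str  # e.g. "created" or "modified"
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    @dataclass
    class LabelledContent:
        content_id: str
        chain: List[ProvenanceEvent] = field(default_factory=list)

        def record(self, actor: str, action: str) -> None:
            # Append the event so that the first originator and any later
            # modifier of the content both remain identifiable.
            self.chain.append(ProvenanceEvent(actor=actor, action=action))

    # Example: the first originator creates the content; another user later modifies it
    item = LabelledContent(content_id="example-id-123")
    item.record("user:first-originator", "created")
    item.record("user:editor-42", "modified")
    for event in item.chain:
        print(event.actor, event.action, event.timestamp)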

Conclusion

Although the Revised Advisory is seen as a welcome change, the ambiguity around the legal provision on the basis of which MeitY issued these advisories raises questions about their enforceability and binding value. As with the Advisory, the measure for determining what is "unreliable" or "under-tested" remains unclear, thereby making compliance difficult. Though the labelling requirement is carried forward from the Advisory to the Revised Advisory with some changes, there is no clarity on the acceptable forms of labelling to be followed by intermediaries and platforms. Further, the Revised Advisory mentions that a "consent pop-up" may be used to inform users about the unreliability of the generated output, even though the purpose of a "consent pop-up" is to obtain consent from users and not merely to intimate them of the fallibility of the output generated.

Patents (Amendment) Rules, 2024

MoCI, vide notification dated March 15, 2024, issued the Patents (Amendment) Rules, 2024. The key amendments are as follows:

  1. the period within which an applicant must file the statement and undertaking regarding foreign applications is changed to 3 (three) months from the date of filing the application (earlier this was 6 (six) months);
  2. a patent applicant may file 1 (one) or more further applications under Section 16 of the Patents Act, 1970 ("Patents Act"), including in respect of an invention disclosed in the provisional or complete specification or a further application filed under Section 16 of the Patents Act;
  3. a request for examination under Section 11B of the Patents Act must be made in Form 18 within 31 (thirty-one) months (earlier this was 48 (forty-eight) months) from the date of priority of the application or from the date of filing of the application, whichever is earlier; and
  4. Rule 70A dealing with provisions with respect to certificate of inventorship is inserted.

Further, vide notification dated March 16, 2024, MoCI issued the Patents (Second Amendment) Rules, 2024, inserting Chapter XIVA dealing with provisions related to adjudication of penalties and appeals. Form 31 (Complaint for contravention or default of Sections 120, 122 and 123 of the Patents Act, 1970) and Form 32 (Appeal against an order passed by the adjudicating officer) were also inserted.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
