ARTICLE
16 October 2024

AI And Technology Newsletter | September 2024

Dentons Link Legal


Established in 1999, Dentons Link Legal is a full-service corporate and commercial law firm with over 40 partners and 150 lawyers across multiple practice areas. With offices across all major Indian cities and access to more than 200 offices in more than 80 countries of Dentons’ combination firms across the world, Dentons Link Legal is equipped to assist you in achieving your business objectives with the help of a team of experienced, well-trained and qualified lawyers. The Firm’s clientele includes some of India’s leading corporate groups, public sector undertakings, public sector and private banks, private individuals, and multinational corporations across the world.

The evolving legal landscape in India continues to respond to emerging challenges and technological developments across various industries. Recent actions by regulatory bodies and judicial pronouncements highlight the importance of safeguarding rights and ensuring accountability in an increasingly digital environment. This section provides an overview of the latest industry updates, including the Bombay High Court's stance on impersonation of judges, MeitY's advisories concerning AI and intermediary obligations, and advancements in AI-based traffic solutions. Furthermore, it touches upon global regulatory trends such as the EU AI Act, the UAE's AI Charter, and the US COPIED Act, illustrating a global effort to balance innovation with ethical considerations and public safety.

I. Industry updates:

Industry Updates - India:

1. Bombay High Court takes cognizance of impersonation of judges for unlawful financial gains.

September 10, 2024: The Registrar General of the Bombay High Court issued a notice taking cognizance of the increasing menace of fake calls and messages impersonating judges and officers of the Hon'ble Court. Modern technologies have made it easier for impersonators to conduct such activities seamlessly and to mirror judges' personal attributes far more accurately than before. Vide this Notice, the High Court has urged individuals receiving such messages and calls to proactively report them to the relevant police stations having jurisdiction, and to inform the Nodal Officer, Shri Rajendra T. Virkar, expeditiously, for action to be taken under the law. This marks a significant step by the judiciary to counter cyber-crimes and give precedence to personality rights. Further, it highlights the far-reaching ramifications of such violations for the security and integrity of the nation.

Link Here

2. The Ministry of Electronics and Information Technology ("MeitY") issued a series of advisories on the due diligence obligations of intermediaries under the Information Technology Act, 2000 ("IT Act") read with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ("IT Rules") in respect of AI-generated content hosted by them

December 26, 2023: MeitY issued an advisory reiterating the obligation of intermediaries/platforms to monitor content generated and published on their platforms that contravenes the provisions of the IT Rules. MeitY went a step further to state via this Advisory that, in furtherance of the obligation of intermediaries to report violations elucidated in Rule 3(1)(b) of the IT Rules, they must enable users to proactively report such violations through the app itself or via a link to an email id and/or contact number of the Grievance Officer appointed under the IT Rules.

Link Here

March 1, 2024: In continuation of the December 2023 Advisory, MeitY issued an advisory imposing additional obligations on intermediaries to obtain statutory permissions prior to the deployment of under-trial AI models and to ensure that users are informed about the inherent fallibility of the output generated by such models through "consent pop-ups". MeitY made a specific reference to the Lok Sabha elections and emphasized that intermediaries must ensure that their computer resources are not utilised to propagate bias or discrimination or threaten the integrity of the electoral process by any means, including by the use of AI models, LLMs, GenAI apps, software(s) or algorithm(s) [1]. Further, this Advisory made it mandatory for intermediaries to label any generative content that could potentially be used as misinformation or a deepfake, to enable the identification of the first originator (amongst other parties).

Link Here

March 15, 2024: MeitY issued another advisory clarifying the ambiguities of the March 1, 2024 Advisory, the key highlights of which are summarised hereinbelow:

i. All intermediaries or platforms must ensure that AI models, LLMs, GenAI apps, software(s) or algorithm(s) do not permit their users to host, display, upload, modify, publish, transmit, store, update or share any content that is in violation of Rule 3(1)(b) of the IT Rules, or any other provision of the IT Act or any other laws in force (whereas the March 1, 2024 Advisory referred only to violations of the IT Act). Here, it is important to highlight that while the IT Rules already grant safe harbour protection to intermediaries contingent upon their adherence to due diligence obligations in compliance with prevailing laws, this Advisory expands the scope of responsibility for intermediaries from conducting general due diligence to specifically ensuring compliance with respect to the deployment of AI.

ii. The mandate under the March 1, 2024 Advisory to obtain explicit statutory permission from MeitY for deploying under-trial AI models has been overruled. Instead, the advisory treats the prescription under the March 1, 2024 Advisory to communicate the potential risk of inaccuracy in the output of such products via "consent pop-ups" as sufficient for the said purposes.

iii. The mandate of the March 1, 2024 Advisory to label content to identify the first originator has also been overruled in this Advisory.

iv. This Advisory further overrules the requirement under the March 1, 2024 Advisory to submit an action taken-cum-status report to MeitY within a prescribed time period. The advisories issued in March 2024 came against the backdrop of noted negligence by intermediaries and platforms in performing their due diligence obligations under the IT Act and IT Rules, more so in light of the growing concerns around the lack of transparency and accountability in the usage of AI.

Link Here

September 03, 2024: In continuation of the aforementioned advisories, and further fuelled by the Bombay High Court order in National Stock Exchange of India Ltd. vs. Meta Platforms, Inc. & Ors. [2], in which intermediaries were directed to delete or disable fake information, including morphed videos and profiles circulating on their platform, promptly within ten (10) hours of receiving such a complaint, MeitY, vide this Advisory, emphasised the need for intermediaries to comply with take-down orders from entities like the National Stock Exchange and to take prompt action to remove the subject content from their platforms, failing which individuals who fall prey to cyber frauds and scams could suffer irreparable harm.

Link Here

3. The Ministry of Corporate Affairs ("MCA") invites comments on the Report and the Draft Digital Competition Bill

March 12, 2024: The Ministry of Corporate Affairs invited public comments on the Report of the Committee on Digital Competition Law ("CDCL") and the Draft Bill on Digital Competition Law, 2024 ("Draft Digital Competition Bill"). The Draft Digital Competition Bill comes in the wake of the digital revolution which prompted the CDCL to review the existing provisions of the Competition Act, 2002. The CDCL made certain recommendations, highlighting a need for a separate legislation - a new Digital Competition Act, allowing the Competition Commission of India ("CCI") to "selectively" regulate large digital entities on an ex-ante basis.

Key features of the Draft Digital Competition Bill include:

  • Applicability - The Draft Digital Competition Bill is proposed to apply to enterprises designated as Systemically Significant Digital Enterprises ("SSDEs") which render Core Digital Services in India – services listed in the Bill, including online search engines, social networking services, operating systems, web browsers, cloud services, etc.
  • Ex-ante regulation - The CCI is proposed to have the right to take proactive action in anticipation of anti-competitive conduct or events, before the consequences become irremediable.
  • Classification of enterprises – The Bill classifies the enterprises to which it applies into SSDEs and Associate Digital Enterprises ("ADEs"). SSDEs are enterprises that have a significant presence in the provision of a Core Digital Service in India with the ability to influence the Indian digital market. An enterprise is classified as an SSDE if it fulfils the dual tests of "significant financial strength" (assessing the market addressed in terms of value) and "significant spread" (assessing the market addressed in terms of volume) to identify market dominance. Irrespective of the fulfilment of these tests, the CCI retains the power to classify an enterprise as an SSDE on the basis of the barriers to entry or expansion it imposes in the industry. An ADE, on the other hand, refers to an enterprise which is a subsidiary of, or part of the conglomerate of, an SSDE.
  • Obligations – The Bill imposes obligations on SSDEs to ensure fair competition and not abuse their dominant position by way of discriminatory pricing, unfair terms of service, exclusivity arrangements or creating barriers to innovation and market entry. Further it is proposed that SSDEs shall be required to maintain transparency in the process of collection of personal data from users and take necessary consents to cross-use such data for purposes other than those for which it is collected or to share the same with third parties.
  • Penalties – The classification of an enterprise as an SSDE is not optional. If the enterprise fails to notify the CCI within ninety (90) days that it qualifies as an SSDE, the CCI may impose penalties of up to one percent (1%) of the global turnover of such enterprise. Further, submission of incorrect, incomplete or misleading information by the enterprise in this regard shall invoke the same penalty. The designation, once granted, subsists for a period of three (3) years but may be waived by the CCI pursuant to an application from the designated enterprise, if the CCI is convinced that the enterprise no longer qualifies as an SSDE.

Link Here

Link Here

4. MeitY amends Regulations on deletion of Surveillance data

February 26, 2024: MeitY, via Notification No. G.S.R. 133(E), has amended the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009, by virtue of which not only security agencies but also the Union and State home secretaries have been directed to delete surveillance data after a period of six (6) months.

Link Here

5. Maharashtra's First AI Traffic Solution Launches on Pune-Mumbai Expressway

July 19, 2024: The Pune-Mumbai Expressway has introduced Maharashtra's first AI-powered Intelligent Traffic Management System ("ITMS"), which uses data analytics to optimise road safety and ensure seamless vehicle movement and is monitored by the Regional Transport Officer from Lonavala. The ITMS is powered by 200+ AI-enabled cameras along the 95-kilometre expressway and can automatically detect traffic violations, recognise vehicle number plates, and issue e-challans. This initiative is indicative of the Government's positive sentiment towards deploying AI in order to augment work and enhance safety.

Link Here

Industry Updates - Global:

1. EU leads in the release of its new Artificial Intelligence Act

June 13, 2024: The European Union ("EU") AI Act, 2024 has been brought into force, with progressive enforcement staggered over the coming years. Listed below are the key features of the legislation:

  • Enforceability: The EU AI Act becomes enforceable progressively over a period of time:
  1. Provisions on prohibited practices will be enforceable from February 2, 2025.
  2. Provisions pertaining to notification authorities, governance of AI models, confidentiality etc. would come into force from August 2, 2025, excluding the provisions relating to fines for providers of General-Purpose AI Models ("GPAI") which would become enforceable only on August 2, 2026.
  3. The remaining provisions of the EU AI Act will come into effect on August 2, 2026, with the exception of the provision relating to the classification rules for High-Risk AI Systems, which will become effective on August 2, 2027.
  • Risk based approach: The EU AI Act, 2024 adopts a risk-based approach, categorising AI applications into four levels of risk and prescribing deterrent penalties for violations:
  1. 'Prohibited AI systems' which manipulate human decision making, exploit individuals' vulnerabilities, result in social discrimination or violate fundamental rights including the right to privacy. However, exceptions are made for systems used for law enforcement and security purposes. Non-compliance with this provision would invoke a fine of EUR 35,000,000 or 7% of the global annual turnover for the preceding financial year, whichever is higher.
  2. 'High risk AI systems' being standalone products or components of products which could adversely affect the health and safety of individuals, fundamental rights, or the environment (especially if misused or in case of a product defect), and therefore trigger stringent obligations on developers and deployers thereof. Non-Compliance with such obligations shall invoke a fine of up to EUR 15,000,000 or 3% of the global annual turnover for the preceding financial year, whichever is higher.
  3. 'Moderate Risk AI systems' which run a risk of being misused to manipulate or deceive individuals and are hence subject to specified transparency obligations. Failure to adhere to such obligations shall invoke a fine of up to EUR 15,000,000 or 3% of the global annual turnover for the preceding financial year, whichever is higher.
  4. 'Low Risk AI systems' which do not fall under any of the above-mentioned categories and therefore do not carry any obligations, save for recommendations to follow general principles such as human oversight, non-discrimination, and fairness. It is necessary to note that in case of supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities, a fine of up to EUR 7,500,000 or 1% of the global annual turnover for the preceding financial year, whichever is higher, may be imposed even for low-risk AI systems (a worked illustration of the "whichever is higher" caps follows this list).
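To illustrate how the penalty ceilings above operate, the following is a minimal, purely illustrative Python sketch of the "whichever is higher" rule. The fixed amounts and percentages are taken from the summary above; the turnover figure and the function name are assumptions for illustration only, and the amounts are caps rather than automatic fines.

```python
# Illustrative only: the EU AI Act's "whichever is higher" penalty caps,
# applied to a hypothetical global annual turnover. These are maximum
# exposures, not automatic fines.
def penalty_cap(fixed_cap_eur: float, turnover_share: float, global_turnover_eur: float) -> float:
    """Return the higher of the fixed amount and the turnover-based amount."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

turnover = 2_000_000_000  # hypothetical global annual turnover (EUR)
print(penalty_cap(35_000_000, 0.07, turnover))  # prohibited practices cap -> 140,000,000
print(penalty_cap(15_000_000, 0.03, turnover))  # high/moderate risk cap   ->  60,000,000
print(penalty_cap(7_500_000, 0.01, turnover))   # incorrect information cap -> 20,000,000
```

For a smaller enterprise whose turnover-based amount falls below the fixed figure, the fixed figure becomes the applicable cap instead.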

Link Here

2. UAE launches Charter for the development and use of AI

July 30, 2024: The UAE's AI, Digital Economy, and Remote Work Applications Office launched this Charter to provide a guiding framework promoting the ethical and safe development and deployment of AI, laying emphasis on increased awareness and legal compliance. While the Charter is not legally enforceable, it indicates a proactive stance of the Government on compliance with ethical principles in the deployment of AI, including ensuring the safety and transparency of AI systems, accountability in terms of protecting the privacy of personal and confidential data (since data forms the bedrock of an AI model), and prevention of algorithmic bias in the delivery of output, so that the systems are inclusive and accessible.

Link Here

3. US Senate introduced the COPIED Act – a new US Bill for deepfakes

July 11, 2024: The new "Content Origin Protection and Integrity from Edited and Deepfaked Media" Bill ("COPIED Act") in the US proposes measures to improve transparency and accountability in respect of AI-generated content. Listed below are the key highlights of the COPIED Act:

  • Definition of Deepfakes: While "deepfakes" has now become a commonly used term, the COPIED Act defines 'deepfake' as synthetic content or synthetically-modified content that appears authentic to a reasonable person and creates a false understanding or impression. Synthetic content has, in turn, been defined to mean information, including works of human authorship such as images, videos, audio clips and text, that has been wholly generated by algorithm(s), including by AI. It is pertinent to note that, unlike in general parlance, the Bill does not restrict the definition of 'deepfakes' to morphed images of individuals alone but encompasses all categories of works of human authorship.
  • Obligations – Some of the key obligations proposed by the Bill for developers and deployers of AI systems are as follows:
  1. Provide users an option to tag content generated through AI with its origin, to ensure its authentication and detection.
  2. Where AI systems or applications are used to generate synthetic content or covered content, give users an option to attach content provenance information, within two (2) years from the date of enactment of the Act.
  3. Refrain from removing, altering, tampering with, or disabling content provenance information, with a limited exception for security research purposes.
  • The Bill proposes to extend the Federal Trade Commission's ("FTC") enforcement powers, in respect of unfair and deceptive practices, to cover violations under this Act.

Link Here

4. Singapore publishes its Model Governance Framework for Generative AI

May 30, 2024: The AI Verify Foundation [3] and the Infocomm Media Development Authority of Singapore have released their Model AI Governance Framework for Generative AI ("Singapore GenAI Framework"), which is primarily a voluntary framework. It is designed to guide businesses and organizations in the responsible development and deployment of generative AI technologies. While it is not an enforceable regulation, it provides a framework of best practices for the development and deployment of AI and, in many ways, mirrors the UAE Charter mentioned above. It lays considerable emphasis on the accountability, safety and transparency of AI systems, coupled with adequate and regular testing; the adoption of Privacy Enhancing Technologies (PETs), a set of tools, such as end-to-end encryption, that help protect user data while still allowing for data collection, analysis, and sharing; prompt reporting of incidents breaching ethical standards; standards for content provenance detection and transparency; and democratised access.

Link Here

5. Colorado's new Artificial Intelligence Regulation

May 17, 2024: The Governor of Colorado has approved a law concerning Consumer Protection for Artificial Intelligence, which shall be incorporated into the Colorado Revised Statutes. It bears a similarity to the EU AI Act in taking a risk-based approach and in its focus on high-risk AI systems; for example, the requirement of developing a risk management system is common to both enactments. The act is set to take effect from February 1, 2026. Key features of the act are:

  • Definition of algorithmic discrimination: The act defines 'algorithmic discrimination' as any condition in which the use of AI results in unlawful differential treatment of an individual or group on the basis of their age, disability, ethnicity, etc.
  • Obligations: The following are the key obligations imposed on developers and deployers of high-risk AI systems:
    • Detailed documentation must be maintained describing the purpose, intended uses, the types of training data used, and any known or reasonably foreseeable limitations.
    • Impact assessments shall be conducted to evaluate potential effects of AI systems, especially concerning discrimination and fairness.

Link Here

6. Balancing Google's Net Zero Goals amidst its AI integration

July 06, 2024: In the run-up to its net-zero emissions goal for 2030, Google's recent environmental report reveals a sharp increase in greenhouse gas emissions, which it attributes to increased data centre energy consumption and supply chain emissions. Google has issued a statement to the effect that further integration of AI into its products may be challenging due to the increased "energy demands from the greater intensity of AI compute, and the emissions associated with the expected increases in our technical infrastructure investment." [4] This raises concerns about the future of AI and its balance with environmental goals.

Link Here

II. Legal updates/ judgments:

a. India

Even though personality rights are not explicitly protected under Indian law, various Courts have, in the recent past, taken proactive steps to recognise the value of personality rights as the monetizable goodwill of reputed figures.

1. Neela Film Productions Pvt. Ltd. v. TaarakMehtakaOoltahChashmah.com & Ors. (CS (COMM) 690/2024)

August 14, 2024: The Delhi High Court, in a recent decision, recognized the personality rights of the characters from the show 'Taarak Mehta Ka Ooltah Chashmah' ("Show") by granting an ad interim injunction. The Court restrained the defendant from utilizing an "unknown face distortion technology" to publish videos or images incorporating deepfakes that impersonate the characters of the Show. Additionally, the Court observed that the defendants had been hosting, streaming, and selling goods or services that infringed upon the plaintiff's copyright. This is an interesting case highlighting the intersection between copyright protection and the enforcement of personality rights.

Link Here

2. Arijit Singh v. Codible Ventures LLP and Ors. (Com IPR Suit (L)/ 23443/ 2024)

July 26, 2024: In another matter pertaining to the violation of personality rights, the Bombay High Court provided ad interim relief by restraining AI platforms from cloning the voice of renowned musician Arijit Singh ("Plaintiff") without his consent. The Court recognised the high vulnerability of performers to unauthorised exploitation by generative AI and AI platforms, which capitalise on a performer's personality rights. While the Court did not prescribe deletion of the entire videos, the AI platforms were directed to remove or delete the references to the singer's personality traits, voice, image, etc., from their videos. The next date of hearing is September 27, 2024.

Link Here

3. Jaikishan Kakubhai Saraf v. Peppy Store, (2024 SCC OnLine Del 3664)

May 15, 2024: An ex-parte injunction was passed by the Delhi High Court against entities disseminating distorted videos using GenAI tools as well as a company operating an unlicensed chatbot impersonating the reputed actor Jackie Shroff ("Plaintiff"). The Court restrained the use, for any commercial purpose without his consent and authorisation, of the name and other sobriquets of the Actor in a manner that tarnishes his reputation and violates his moral rights. The Court further directed the Department of Telecommunications and MeitY to issue the directions necessary for telecom/internet service providers to block the infringing links.

Link Here

b. Global

1. Canadian Federal Court receives its first AI copyright violation case

July 8, 2024: The Federal Court of Canada, Ottawa was for the first time faced with the fundamental question of whether an AI-generated image or artwork is entitled to protection under the Copyright Act. The copyright registration of the image "Suryast", which was generated by an AI tool called RAGHAV AI, was challenged on the grounds that the image lacked originality and that an AI (a non-human) cannot be an author under the Canadian Copyright Act.

Link Here

2. Microsoft-backed OpenAI faces a highly controversial suit alleging illegitimate scraping

July 01, 2024: A recent class-action lawsuit was initiated against Microsoft and OpenAI, alleging the unlawful use of web scraping and copyright violations in the training of their AI models, such as ChatGPT. The plaintiffs claimed that the companies utilised publicly available content without proper authorisation for training their AI prototypes. The judgment is awaited and would be a landmark ruling on data scraping in relation to the training of AI models.

Link Here

3. US Record Labels Sue Suno and Udio Over Copyright Infringement in AI Music Generation

June 24, 2024: Lawsuits have been filed by the Recording Industry Association of America ("RIAA") along with major record labels against the AI music generators Suno and Udio, alleging widespread copyright infringement and potentially seeking damages of up to $150,000 per infringed work [5]. Notably, the lawsuits cite instances where AI-generated songs bear a close resemblance to iconic songs, such as Chuck Berry's "Johnny B. Goode" and Mariah Carey's "All I Want for Christmas Is You," thereby infringing on the copyrighted works.

Link Here

III. Fun facts:

1. September 11, 2024: OpenAI has launched a new 'advanced voice mode' feature that responds to prompts in a manner resembling natural human conversation. Concerns have been expressed that this may lead to the development of intimate relationships between humans and the chatbot.

Link Here

2. September 4, 2024: AI saved a life: Meta AI picked up a viral post by a woman on the verge of suicide and alerted the police, while facilitating real-time location tracking.

Link Here

3. August 28, 2024: Disney Research has released a model that can mimic human facial movements. Additionally, experiments are being conducted with robots for films and theme parks, to avoid accidents caused during stunts.

Link Here

4. August 17, 2024: Sarvam AI launched the second version of its model, which promises to translate and summarize content in various Indian languages. The Sarvam 2B model is trained on synthetic data and can be used on WhatsApp as well as on traditional voice calls.

Link Here

5. July 24, 2024: Meta has outlined a vision to create AI clones that would reflect a creator's personality and goals and could handle interactions with fans. This would be beneficial for enhanced creator-fan interaction while allowing creators to save time and focus on content creation.

Link Here

6. July 17, 2024: The app, called IRL AI, is an ongoing experiment that suggests spots for dates and meet-ups and handles the logistics, facilitating real-life interactions.

Link Here

7. July 10, 2024: Neuromarketing is an innovative approach that involves studying brain activity, enabling marketers to identify and provoke emotional responses in consumers. Brands can use this to create content that captures attention, leading to higher engagement rates.

Link Here

8. July 06, 2024: YouTube has updated its AI-powered Erase Song tool, which now removes copyrighted music from videos without muting the rest of the audio, improving on the previous version that often muted larger sections of audio.

Link Here

9. June 28, 2024: The future of smart glasses in the market: Meta's Ray-Ban Wayfarer glasses, featuring built-in cameras and AI voice commands, provide users with seamless multimedia capabilities. In contrast, competitors like Xreal's AIR 2 Ultra focus on immersive experiences, offering expansive virtual screens for users.

Link Here

10. May 23, 2024: The Manipur High Court utilised ChatGPT for research in the service law case of Mohd. Zakir Hussain v. State of Manipur, 2024. Unable to obtain the necessary information otherwise, the Court resorted to Google and ChatGPT 3.5 for additional research to obtain material assisting it in arriving at a decision.

Link Here

IV. Glossary:

a. AI – Artificial Intelligence, which refers to the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

b. AI model - A program that applies one or more algorithms to data to recognize patterns, make predictions, or make decisions without human intervention.

c. Generative AI - Generative AI (GenAI) is a type of Artificial Intelligence that can create a wide variety of data, such as images, videos, audio, text, and 3D models.

d. LLM - Large language models, a subset of AI models that use machine learning and can comprehend and generate text in natural human language.

e. Authentication - The process of verifying the identity of a user, process, or device, often as a prerequisite to allowing access to resources in an information system.

f. Encryption - The process of protecting information or data by using mathematical models to code it in such a way that only the parties who have the key to unscramble it can access it.

g. Content provenance - The process of establishing the origin and authenticity of digital content, such as images, videos, audio recordings, and documents (see the illustrative sketch following this glossary).

h. Cyberspace - The dynamic and virtual space that connects different computer systems.
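To make the encryption and content provenance entries above more concrete, below is a minimal, purely illustrative Python sketch. It assumes the third-party cryptography package and the standard hashlib and json modules; the creator and generator labels are hypothetical placeholders, and the snippet is not drawn from any law, advisory or framework discussed in this newsletter.

```python
# Illustrative sketch only: symmetric encryption (glossary item f) and a simple
# content provenance record (glossary item g). Assumes the third-party
# "cryptography" package; all labels below are hypothetical placeholders.
import hashlib
import json

from cryptography.fernet import Fernet

# Encryption: only parties holding the key can unscramble the data.
key = Fernet.generate_key()                    # secret key shared by the parties
cipher = Fernet(key)
token = cipher.encrypt(b"confidential user data")
assert cipher.decrypt(token) == b"confidential user data"

# Content provenance: record the origin together with a tamper-evident hash.
content = b"<bytes of an image, video, audio clip or document>"
provenance_record = {
    "sha256": hashlib.sha256(content).hexdigest(),  # changes if the content is altered
    "creator": "example-studio",                    # hypothetical originator label
    "generator": "human",                           # or, e.g., "generative-AI-model"
}
print(json.dumps(provenance_record, indent=2))
```

In practice, provenance schemes typically attach such metadata to the content itself, often cryptographically signed, so that downstream platforms can verify the origin of a file before relying on it.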

Footnotes

1. Anoop Verma, "MeitY's AI Deployment Advisory: How IT cos Tailor their Approach to Align with Government Policy", March 11, 2024 - Economic Times.

2. Interim Application (L) No. 21456 of 2024 in COM IPR Suit (L) NO.21111 of 2024.

3. The AI Verify Foundation is a not-for-profit foundation that aims to promote best practices and standards for AI: AI Verify Foundation.

4. Evgenia Gubina, "Google's greenhouse gas emissions have increased by almost 50% – all due to artificial intelligence", July 3, 2024 - Mezha Media.

5. "Music Labels Sue AI companies Suno, Udio for US Copyright Infringement", June 25, 2024 - Economic Times.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

