ARTICLE
3 February 2026

Use Of AI In Schools: The Legal And Ethical Risks

ENS

ENS is an independent law firm with over 200 years of experience. The firm has over 600 practitioners in 14 offices on the continent, in Ghana, Mauritius, Namibia, Rwanda, South Africa, Tanzania and Uganda.

As AI becomes embedded in classrooms, schools and universities face a pivotal choice: harness its benefits for personalised learning, support and feedback, and improved access to information, while managing legal and ethical risks relating to academic integrity, data privacy, intellectual property, bias and transparency. In this article, we address some of the key legal and ethical considerations surrounding the use of AI in the classroom and provide practical guidelines for the responsible use of AI tools.

Erosion of academic integrity

In the wake of the AI boom, students were quick to exploit generative AI tools to produce essays, homework answers and other academic outputs. Students who submit AI-generated content without any review risk committing plagiarism and infringing intellectual property laws. There is also a concern that reliance on AI tools will erode academic integrity, reduce critical thinking and impair students' ability to retain information.

In an attempt to curtail the widespread use of generative AI tools in academia, we have witnessed the rise of AI detection platforms designed to identify the extent to which content may have been generated by AI. Staff should be trained not only on how to use these detection tools but also to recognise common telltale signs of AI-generated content, such as overly uniform sentence structures, generic phrasing and unusual punctuation patterns, including the overuse of the em dash.

Institutions should also implement comprehensive AI policies that clearly define acceptable and unacceptable uses of AI. These policies should be communicated through practical guidelines, such as a "dos and don'ts" list, and reinforced with regular training. By combining AI tools with human judgement and clear institutional standards, schools and universities can safeguard academic integrity while acknowledging the evolving role of AI in education.

Data privacy

Children's personal information is protected as a distinct category under the Protection of Personal Information Act, 2013 ("POPIA"). This means that POPIA generally prohibits the processing of children's personal information unless (i) prior consent is obtained from a competent person; (ii) the processing is necessary for the establishment, exercise or defence of a right or obligation in law; (iii) the processing is necessary to comply with an obligation of international public law; (iv) the processing is for research purposes; or (v) the processing is of personal information which has deliberately been made public by the child with the consent of a competent person. It is also important to note that the Information Regulator may grant a dispensation to process children's personal information if (i) the processing is in the public interest; and (ii) appropriate safeguards have been put in place to protect the child's personal information (subject to any reasonable conditions which the Information Regulator may impose).

The integration of AI tools into classrooms introduces new risks. Many AI platforms require students to input personal information, which may be shared, stored or processed without proper consent or oversight. This raises concerns about compliance with POPIA, particularly where children's personal information is involved. Without clear safeguards, institutions risk exposing sensitive information to third-party service providers, which could lead to data breaches, unauthorised profiling and/or the misuse of children's personal information. Academic institutions must therefore not only comply with POPIA's consent and safeguarding requirements but also implement stringent policies governing the use of AI tools, ensuring that students' personal information is never shared or processed without explicit consent and robust data protection measures in place.

Students and parents should also understand when and how AI is used. Institutions should publish accessible privacy notices describing the categories of AI use, the data collected, the purposes of processing, the role of human review, and contact points for questions and access requests. These notices should be written in plain language and shared with parents and guardians, and, where appropriate, institutions should obtain consent from a competent person before processing children's personal information. For optional AI features, affirmative consent can be obtained and non-AI alternatives offered. In higher education, syllabi can state expectations for AI use, permitted AI tools, required disclosures and academic integrity implications.

In late 2024, the Information Regulator brought an urgent application to interdict the Department of Basic Education ("DBE") from publishing matric results in newspapers, arguing that such publication, albeit using learners' examination numbers, constituted unlawful processing of children's personal information under POPIA. The Regulator had issued an enforcement notice directing the DBE to limit disclosure to schools and secure SMS platforms, but the DBE, supported by media organisations and civil society groups, defended the practice as longstanding, transparent and in the public interest. Ultimately, the court struck the matter from the roll, effectively siding with the DBE and noting that the Regulator's urgent application did not meet the threshold for relief. The Regulator has since warned that the ruling sets a precedent that weakens safeguards for children's personal information and undermines its enforcement powers under POPIA.

Admission into academic institutions: Automated Decision-Making (ADM)

Using AI to automate admission into academic institutions carries significant legal and ethical risks, particularly around fairness, transparency and compliance with constitutional and data protection laws. If not carefully regulated, such AI tools could reinforce inequality, discriminate against marginalised groups and undermine trust in academic institutions.

The risk of harm: Garcia v Character.AI case study

In the recent matter of Garcia v Character.AI, an AI chatbot is alleged to have encouraged an adolescent to take his own life after engaging him in discussions on harmful topics, including suicide. The case highlights the importance of deploying proper checks and balances, such as guardrails and policies, to prevent users from prompting, and AI tools from responding with, harmful, objectionable and inappropriate content. This is especially so when an AI tool is publicly accessible and made available to children. Academic institutions will need to establish a clear policy on AI tools, addressing, inter alia, which tools may be used by students and clear guidelines on the dos and don'ts when students use AI tools.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
