30 August 2023

The ChatGPT takeover: the potential use (and misuse) of AI in education

Carroll & O'Dea


Established over 120 years ago, Carroll & O’Dea Lawyers offers expert advice and strong advocacy for clients. With a commitment to high-level service and legal expertise in all areas, they blend tradition with modern skills.
Discusses limitations and opportunities of ChatGPT and legal issues that may arise from adopting AI in schools.

Since the launch of ChatGPT at the end of 2022, it is increasingly clear that AI technologies and computer-generated works (including content produced by ChatGPT) are here to stay.

It isn't just ChatGPT either. AI art generators such as NightCafe are also gaining traction.

Recently, the Chrome mobile browser app was updated to include a 'homework helper' extension that will solve maths problems for you, with working shown. Seemingly overnight, these technologies have become commonplace.

While AI may not replace most teachers, or doctors, or lawyers (yet!), many of us are working with these technologies already, or working towards implementing them. In education, much of the discussion has been around the potential misuse of AI chatbots or art generators - for instance, by university and school students 'cheating' by using them to produce assignments, essays, images, or even to just answer simple homework questions. This article discusses the limitations and opportunities of ChatGPT and the legal issues that may arise from adopting AI in schools and workplaces.

State School response

Following concerns that ChatGPT will be misused by students, NSW became the first state or territory to ban ChatGPT in public schools ahead of Term 1 this year. NSW public school students are restricted from accessing ChatGPT from school. Queensland, Western Australia and Tasmania have followed suit.

In early February, Victoria decided to ban students and staff from using ChatGPT by blocking it from all public school servers and devices.

However, ChatGPT can be used in South Australian public schools, in some circumstances and with safeguards in place. For instance, it is intended that ChatGPT will be 'blocked' during exams, but at other times students will be taught how best to use it, when to use it and how to sort through 'misinformation' and 'disinformation' in ChatGPT content.

Limitations of ChatGPT

That last point is critical - ChatGPT can already write convincingly and with authority. However, that doesn't mean what it says is true or accurate.

Sometimes, answers will be completely incorrect, biased or even just nonsense. We also need to keep in mind that ChatGPT finished training in early 2022. As it was largely trained on data from 2021, it is not up to date on the most recent events.

While concerns relating to misuse of ChatGPT are valid, we should remember that students going into the workforce are likely to be using some form of AI technology - for copywriting, graphic design, marketing, customer service and even software engineering.

These AI technologies require us to think of better ways to test learning. Clearly, there are ethical issues with claiming AI-generated work as your own, as well as concerns from educators that students who avoid 'doing the work' by using AI technologies aren't really learning. There are also currently no foolproof ways of detecting ChatGPT-generated content.

Potential Uses in Education

AI technologies present an opportunity to consider how they can be used to aid learning, to support and assist students with disability or additional needs, and to make teachers more productive by relieving some of their administrative and marking burden.

Many social commentators think that more repetitive jobs and tasks such as data entry will be overtaken by ChatGPT and other similar applications but suggest there will always be a place for people who are trained to use these technologies in industry.

If this is the case, then should that training not start at school in a controlled environment?

Legal Perspective

We consider there are a few legal issues with simply adopting the new technology in schools and workplaces: intellectual property and employment, misleading and deceptive conduct, privacy, and defamation.

Intellectual Property

According to current Australian law, computers cannot be the author of a work that attracts copyright protection. Under the Copyright Act 1968 (Cth), a work must be 'sufficiently original' to be protected, which requires an exertion of human skill, independent effort, creativity and 'sweat of the brow'. In short, for there to be copyright protection, the work must have a human author.

Similarly, the Full Court of the Federal Court of Australia recently concluded that 'AI technology', as a computer and a non-human, could not be considered an 'inventor' for the purposes of a patent application.

If educators use AI to produce content for lesson plans or other activities, it is important to keep in mind that this work may not be protected by copyright. It may also be impossible for the user of an AI chatbot to determine whether the chatbot has copied another person's work, leading to an inadvertent infringement of copyright or other intellectual property rights.

It is clear the law in Australia does not recognise ownership of intellectual property by AI. While ChatGPT and NightCafe may state that you own any content created by the AI technology in response to your prompts, this isn't consistent with Australian intellectual property law.

Unfortunately, the law in this space has not kept up with recent developments in technology and was certainly not drafted with AI in mind. These laws will need to be revisited as AI technologies become increasingly common.

Misleading and Deceptive Conduct

As we mentioned above, material produced by ChatGPT may be incorrect and may accordingly be misleading or deceptive. If this material is used by a school (for example, in its marketing publications), the school could find itself in breach of the Australian Consumer Law.


Privacy

Because ChatGPT draws upon data that contains personal information, it is quite possible for the material it creates to contain personal information. School personnel who use ChatGPT to create material may well find themselves in breach of the Australian Privacy Principles under the Privacy Act 1988 (Cth) by virtue of their collection of this personal information and their use of it.


Defamation

If you publish material produced by ChatGPT or NightCafe that is not only false but also likely to damage the reputation of another individual, a small business or a not-for-profit organisation (such as a school), you may find yourself facing an action for defamation. It remains to be seen whether the defence of innocent dissemination will be available to someone who unknowingly publishes a defamatory ChatGPT article or a fake image produced by NightCafe.

Take Action!

We suggest schools and universities thoroughly review:

  • their employment contracts, as the provisions about ownership of materials created in the course of employment may need amendment;
  • their privacy policies;
  • their processes for checking marketing material;
  • their academic misconduct and student behaviour policies;
  • how they intend to test students' learning.

While the potential uses of AI technologies cannot be denied, we must first deal with the potential misuses and legal risks.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
