The Tool, Not The Answer: AI In Disputes

IR Global
Contributor
IR Global is a multi-disciplinary professional services network that provides legal, accountancy and financial advice to both companies and individuals around the world. Our membership consists of the highest-quality boutique and mid-sized firms serving the mid-market: firms focused on partner-led, personal service with extensive cross-border experience.

Q1 Is generative AI already being used in dispute resolution in your jurisdiction?

About 32% of Australian employees use some form of generative AI, but about two-thirds of those believe their manager is unaware. Only 9.5% of large Australian businesses (those employing over 200 employees) have officially adopted AI; across all Australian businesses, the figure drops to 1.4%.

For lawyers, adoption is even more limited:

  • Only 15% of Australian law firms plan to adopt AI within a year;
  • To date, the use of AI has largely been limited to processes such as TAR (Technology Assisted Review), as in cases such as McConnell Dowell v Santam Ltd (No 1) [2016] VSC 73, where TAR was used to narrow the number of documents for disclosure in litigation;
  • TAR has also been used by Queensland solicitors in a number of cases, including Parbery v QNI Metals Pty Ltd [2018] QSC 180; Santos Ltd v Fluor Australia Pty Ltd (No 4) [2021] QSC 296; and Golden Vision Gold Coast Pty Ltd v Orchid Avenue Pty Ltd [2022] QSC 49, [142]-[143].

Many Australian law firms are reported to be planning to implement AI, albeit with some reservations. For example:

  • Some larger firms believe ChatGPT is not ready to provide complex advice but may be able to do case summaries or TAR;
  • Allens is reported to have invested in a regulatory tech start-up called Red Marker to automate legal processes;
  • In 2016, Corrs Chambers Westgarth was reported to have taken a 50% stake in an AI automation start-up that uses technology-assisted review to analyse contracts, identify their important clauses and check that they comply with company policies and the country's regulations;
  • Norton Rose Fulbright is reportedly using an AI chatbot to give basic answers to clients' questions about changes to data protection and privacy law.

The kinds of legal generative AI tools that currently exist include:
  • Amica: a government-funded tool that helps with family law disputes, designed for separating couples who are still amicable;
  • Spellbook: a website built on GPT-4 that purports to have specialised legal knowledge;
  • ClauseBase: drafting software for legal experts that is experimenting with AI, smart templates and full document automation.

Q2 How do you feel about generative AI in disputes?

Could ChatGPT or related applications offer a faster, fairer dispute resolution process, or do they pose a serious risk to that process?

The advantages of using generative AI in disputes include:

  • Time and cost savings
  • Generative AI could be used as a filter, dealing with the less complex cases so judges and other legal professionals have more time with emotionally challenging and factually or legally complex cases
  • AI has endless uses in legal practice beyond merely asking it questions of law or predicting legal outcomes
  • Unlike humans, AI will not get tired or hungry

The disadvantages of using generative AI in disputes include:

  • The risk of a client information leak: AI requires extensive training on massive amounts of data
  • ChatGPT explicitly collects data and may share it with third parties without notice
  • 'Hallucination', the phenomenon of AI generating entirely false answers.

Q3 How is Australia preparing to tackle bias and transparency in generative AI tools?

Australia has eight AI Ethics Principles, which form a voluntary and aspirational framework:

  • Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment
  • Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals
  • Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups
  • Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data
  • Reliability and safety: AI systems should reliably operate in accordance with their intended purpose
  • Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them
  • Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes
  • Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems

AI is also the subject of existing regulation and government activity in Australia, including:

  • The 2023-24 federal budget provided $101.2 million for new technologies, including quantum computing and AI
  • A discussion paper titled "Safe and Responsible AI in Australia" was published, and the public was invited to respond on gaps in the research and legislation; over 500 responses were received and 447 were published
  • The Office of the Information Commissioner made a submission about AI to the CSIRO, demonstrating communication and cooperation between government bodies.

Of interest is that the Full Federal Court of Australia has decided that AI is not a legal person and so is not capable of holding patents.

Australia has also made international commitments concerning AI, including signing the new OECD AI Principles, under which signatories agree:

  • To facilitate investment, both public and private, in research and development;
  • To create an environment for ethical AI to grow;
  • To support workers through the AI transition;
  • To co-operate internationally in the creation of standards, in sharing information and in working towards responsible AI.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
