These days, the great fear among many professions, and particularly those active in the legal field, is that Artificial Intelligence (AI) will render their jobs obsolete. It is no secret that applications and websites already exist to help draft contracts, predict legal outcomes, and recommend judicial decisions on sentencing and bail. It is also true that many of these are accessible to any user. In other words, AI seems to be gathering all the ingredients to "replace" humans and take over the roles not only of industrial workers, but also of professionals.

A major alarm was sounded recently for lawyers when it was announced that the first "robot lawyer", created by DoNotPay Inc., was set to go to a US court on 22 February 2023 to help fight a traffic ticket (Megan Cerullo, "AI-powered 'robot' lawyer won't argue in court after jail threats", 26 January 2023). The chatbot was to listen to the court arguments and produce responses in real time, through headphones, for the defendant to repeat to the court. However, the experiment did not proceed after State Bar prosecutors threatened the chatbot's creator, Joshua Browder, with prison time if it went ahead. It should be noted that the use of such technology is illegal in many courtrooms, often because all parties present are legally required to consent to being 'recorded', something which is not generally feasible. So long as such restrictions remain, the product is unlikely to be commercialized in this manner any time soon.

Interestingly, DoNotPay Inc. is also facing a class-action lawsuit, filed on 3 March 2023, for presenting the same "robot lawyer" as a licensed lawyer (Stephanie Stacey, "'Robot lawyer' DoNotPay is being sued by a law firm because it 'does not have a law degree'", 12 March 2023). The lawsuit was filed on behalf of Jonathan Faridian, who used DoNotPay to purchase various legal documents in the belief that they had been drafted by a competent lawyer. In reality this was not the case, and Mr. Faridian claims that the documents he received were substandard.

Meanwhile, and by way of contrast, in Colombia, judge Juan Manuel Padilla openly announced that he had used ChatGPT in preparing his ruling of 30 January 2023 (Luke Taylor, "Colombian judge says he used ChatGPT in ruling", 3 February 2023). The ruling concerned a request to exonerate an autistic child from paying fees related to medical appointments. Although the judge is not facing any legal proceedings as a consequence of this admission, his announcement instigated a wave of criticism.

From the foregoing, it seems that the answer to the question 'Can AI replace lawyers?' is currently 'No'. In our opinion, although AI technology can surely be helpful, there are still important issues yet to be resolved, including the following:

  1. Embedded bias and threat to fundamental rights:

AI systems make use of machine learning, which involves the analysis of vast quantities of data. Good quality output from machine learning depends on the input of good quality data. Too often, however, the data sets provided for a machine learning exercise are tainted by societal inequalities and biases. If tainted data is used, this will be reflected in the supposedly impartial machine learning output, and any AI which makes use of it is likely to reproduce the same biases in its own output: a classic case of garbage in, garbage out. As a simple example, if an AI is only given a certain perspective, it will treat any different perspective as incorrect.

To counter this problem, the 'Proposal for a Regulation laying down harmonized rules on artificial intelligence' (the "Draft EU AI Act") aims to limit AI risks relating to the replication of discrimination arising from human bias. Recital 17 of the Draft EU AI Act[1] notes that AI systems used by public authorities for general social scoring of natural persons may violate the right to non-discrimination and the right to dignity, as such systems classify people on the basis of their social behavior, which can perpetuate harmful stereotypes. One example of the controversial use of AI tainted by human bias is applications such as COMPAS, which are used to advise criminal judges on bail and sentencing decisions. According to a ProPublica study (Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, "Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks", ProPublica, 23 May 2016), such assessment tools appear biased against black prisoners, disproportionately presenting them as more likely to reoffend than white prisoners.

Beyond human bias, AI can also threaten our fundamental rights. The Draft EU AI Act aims to harmonize rules on AI in a way that respects existing laws on fundamental rights. In this regard, it puts in place safeguards under a risk-based approach, meaning that it imposes regulatory burdens where AI threatens fundamental rights (high risk). Furthermore, Title II establishes a list of prohibited AI practices. These cover instances where persons could be manipulated through subliminal techniques beyond their consciousness, or where vulnerable groups, such as people with disabilities or children, could be influenced, exploited or harmed. Such safeguards are critical given the power and potential of AI.

  2. Inaccuracy of AI tools as well as lack of emotion and creativity:

AI systems operate on the basis of the information that recurs most frequently in their training data. This does not necessarily make their output correct, since the most viewed or most repeated information is not necessarily the most trustworthy.

Moreover, such AI systems lack the subtle understanding of legal concepts, the grasp of the specific context in which a contract will be used, the ability to negotiate with other parties, and the human contact and compassion that lawyers possess. These factors make AI less appealing than humans in such areas, especially in sensitive matters. AI systems also currently lack the ability to create "new" solutions; in other words, AI cannot think "outside the box" in the way that humans can.

  3. EU Directive imposing a "human quota":

One of the key objectives of the proposal for a Directive of the European Parliament and of the Council on improving working conditions in platform work is to ensure that, in intensely digitalized environments, there will always be human control and a minimum level of human contact. It aims to establish a sort of "human quota", thus protecting employees from being essentially replaced by robots. Human control is positioned as a crucial and fundamental guarantee mechanism, ensuring that automated decisions are correct and can be adjusted where necessary.

  4. AI paradoxically generating more work:

As AI evolves, it is envisaged that there will be increasingly "expert" AI systems conducting high-level tasks such as medical diagnosis, genetic engineering, airline scheduling and much more. But what if those systems are defective, resulting in harm to third parties? Or, even if they work perfectly, what if they lack 'common sense'? A good example is MYCIN (Ryan Abbott, "I Think, Therefore I Invent: Creative Computers and the Future of Patent Law", 28 September 2016), an expert system for treating blood infections which attempted to diagnose patients based on their reported symptoms. Faced with a patient who had suffered a gunshot wound (and was therefore possibly bleeding to death), the system would still attempt to diagnose a bacterial cause for the patient's symptoms. Furthermore, such systems can make absurd errors, such as prescribing a clearly incorrect dosage of a drug for a patient.

The European Commission has drafted a proposal for a directive on non-contractual civil liability rules relating to artificial intelligence (the "AI Liability Directive"). The aim of this Directive is "to ensure that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies in the EU". To this end, it would introduce a rebuttable presumption of causality, easing the burden of proof so that victims of damage caused by AI can obtain compensation more easily. This new Directive, together with the exponential growth in the use of AI in the coming years, could create a whole new field of expertise for lawyers. Thus AI, rather than replacing lawyers, could actually generate more work for them!

In addition, AI systems are always available, in contrast to humans who typically work eight hours per day, which translates into more profit for companies and a better experience for consumers.

Conclusions

Despite the concerns and issues raised above, AI remains a very useful tool which is here to stay. It is surely better for lawyers and other professionals to embrace it than to fear it. Since AI allows repetitive tasks to be streamlined, it has the potential to free up more time for professionals to spend on the higher-skilled and more rewarding aspects of their jobs. AI could even be used for dangerous tasks, removing the risk of human injury in, for example, areas of high radiation or landmine deactivation (https://www.tableau.com/data-insights/ai/advantages-disadvantages). Since AI is slowly but surely becoming an integral part of our lives, it is best to find ways to work with it. Sensibly used, AI could complement human activities and help people become more efficient and generate more profit.

However, it must be noted that many questions related to the use of AI remain unanswered. A big question for the legal profession at present, for example, is whether lawyers' professional indemnity insurers will be willing to cover cases involving the use of AI. As the two DoNotPay Inc. cases highlight, the use of AI in the legal sector is an exceedingly hot topic, and these are unlikely to be the only cases in which the use of AI is at issue. The outcomes of such cases must be closely scrutinized, as they will play a major role in determining how the future use of AI evolves.

Footnote

1 https://artificialintelligenceact.eu/the-act/

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.