In November, the US Court of Appeals for the Fifth Circuit proposed a new rule requiring lawyers to certify either that they did not use generative artificial intelligence (AI) programs, such as ChatGPT, to draft filings, or that humans reviewed any AI-generated material. The purpose of the rule is to ensure that AI-generated content is not incorporated into legal filings without human verification. The proposal would also hold lawyers accountable and subject their use of AI to full transparency.

Lawyers' use of artificial intelligence can create problems such as "hallucinations": incorrect outputs that could potentially lead to tort liability, consumer harm, or regulatory breaches. Furthermore, a lack of transparency creates unpredictability, making it difficult to check whether a model meets standards of quality and accountability. AI models also tend to give multiple answers to the same question, a behavior known as response divergence. In an attempt to mitigate these risks in the legal field, researchers have proposed explainable AI (XAI) models that provide the reasoning behind their predictions, permitting users to judge whether the AI is right for the right reasons.
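To make response divergence concrete, the sketch below sends an identical prompt to a chat model several times with sampling enabled and collects the distinct answers it returns. It is an illustration only: the use of the OpenAI Python SDK, the model name, and the prompt are assumptions for the example, not anything referenced in the court's proposal.

```python
# A minimal sketch of "response divergence": the same prompt, sampled several
# times at a nonzero temperature, can yield materially different answers.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name below is a placeholder.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Summarize the holding of Marbury v. Madison in one sentence."

answers = set()
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, swap for your own
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,      # sampling enabled: outputs can differ per run
    )
    answers.add(response.choices[0].message.content.strip())

# More than one distinct answer to an identical question illustrates why
# courts want a human to verify AI-generated material before filing.
print(f"{len(answers)} distinct answer(s) across 5 identical queries:")
for text in answers:
    print("-", text)
```

Running such a loop typically produces several distinct summaries, which is precisely the unpredictability that motivates a human-review requirement for filings.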

Under the proposed rule, a lawyer who misrepresents their compliance to the court could have their filings stricken and sanctions imposed against them.
