Corporate legal departments are increasingly receiving requests from business clients to use GPT in their operations - the demand is clear. In a recent TR survey*, 51% of lawyers said GPT should be used in legal work and 72% said it should be used for non-legal work.

Output quality is essential to GPT's usefulness. GPT-4 passed the Uniform Bar Exam in the US, answering 76% of questions correctly and outscoring the average human candidate - the first AI to do so. Passing the exam requires deep legal knowledge, reading comprehension, and the ability to write.

With rising costs and inflation affecting businesses worldwide, law firms remain under intense pressure to reduce operating costs - especially the sharp increase in lawyer compensation seen over the past two years. On the face of it, GPT could deliver significant savings on both fronts.

GPT is improving with every version: GPT-4 recently passed the same Uniform Bar Exam that GPT-3.5 failed six months earlier. Some legal businesses are already investing in GPT, and their lawyers are using it; China is requiring its lawyers to become familiar with the use of GPT in the practice of law. It is therefore important to note that the longer you leave the technology for others to experiment with, the more time and resources you will have to invest to catch up.

(*Thomson Reuters surveyed partners and lawyers at 443 large to mid-size law firms in the UK, US and Canada in March 2023.)

What are the risks and should we worry about them?


A legal insurer has warned firms of significant legal risks associated with GPT, while also recognising that lawyers cannot ignore AI in the conduct of their practice.


No generative AI tool has made its training data public, so there is no way to assess whether that data is accurate or reliable: there have been reports of inaccurate or 'hallucinated' answers. Depending on the nature of the training data, the system could also exhibit bias - for example, when reviewing candidate CVs for recruitment selection.

Legal risks

Confidentiality: users should assume that everything going through GPT, and everything that it outputs in response, will be available to other users.

IP: GPT's answer might contain unlicensed extracts from copyrighted works and otherwise resemble other intellectual property belonging to third parties.

IP: users should not assume they will be able to register IP protection for works or inventions created or assisted by an AI.

Privacy: personal data input into generative AI tools may become part of the AI model itself and later appear in output shown to other users.


Italy briefly ordered OpenAI (the developer of GPT) to stop processing Italian users' data until it complied with the GDPR. Other countries have announced plans to regulate AI (the UK) or to restrict its use in areas such as medical devices and autonomous vehicles (the EU).


GPT can help identify the patterns associated with a cyber attack and assist in resolving it.

On the flip side, criminals can use GPT to find vulnerabilities in code and exploit them.


While the tool's accuracy will improve over time, it should never be used as a substitute for critical thinking. The legal risks create significant challenges for the use of GPT in legal work unless you understand the data on which your generative AI has been trained and can control its output (as with Harvey, which A&O and PwC use). Subject to any regulatory restrictions that may be imposed, there are fewer risks in using GPT for non-legal work.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.