As artificial intelligence ("AI") has become synonymous with innovation and cutting-edge technology, "LLM" is no longer just the abbreviation for a law student studying a Master of Laws. Large language models ("LLMs") are now the buzzword of the AI space.

LLMs are a type of AI that generates human-like responses to complex queries, and they are the models behind the current generative AI boom. Although LLMs have been around for almost a decade, recent advances have attracted phenomenal worldwide attention.

We explore the following questions below:

  1. Why have LLMs taken the world by storm – is it just hype?
  2. How does a business leverage the benefits of LLMs safely and responsibly?
  3. What method should a business use to bring AI into its operations - adopt, deploy or develop AI?
  4. Who can access sensitive business data – can it end up on the dark web?
  5. When can a business rely on the output – can LLMs become a master of the law?

In our second article, we explore the potential of leveraging LLMs safely and responsibly within a business, including change management, AI leadership strategy and how to prepare your information and workforce for an AI-embedded world. First, however, we must consider the uncertainty surrounding this emerging trend in order to understand why it should be implemented cautiously.

  1. Why have LLMs taken the world by storm – is it just hype?

Goldman Sachs Research predicts that generative AI "could drive a seven percent (or almost USD7-trillion) increase in global GDP and lift productivity growth by 1.5 percentage points over a 10-year period". An MIT study conducted across a group of college-educated professionals showed a more than 40% increase in productivity and a nearly 20% increase in quality on writing tasks. These statistics alone are enough to make any business take notice. Conversely, more recent studies have shown high rates of incorrect results, emphasising the risks of over-reliance on AI for factual information, especially in fields requiring expertise, such as law.

LLMs are not new. Earlier chatbots relied on decision trees and rule-based coding; the more recent chatbots are based on LLMs, neural network-based language prediction models built on the transformer architecture. ChatGPT, for example, is powered by generative pre-trained transformers (GPTs), LLMs that generate coherent and fluent text, making their responses more human-like. These models can understand complex information, identify entities and the relationships between words, and generate new text that reads as though a human wrote it.
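To make the shift concrete, below is a minimal sketch, assuming the open-source Hugging Face transformers library and the small public gpt2 model as stand-ins for the far larger models behind commercial chatbots, contrasting a rule-based chatbot with an LLM that generates a reply by predicting one token at a time. The keyword rules and canned answers are illustrative assumptions only.

```python
# Contrast between the two chatbot generations described above.
# Assumes: pip install transformers torch
from transformers import pipeline

# 1. Older approach: a rule-based chatbot with a fixed set of canned answers.
RULES = {
    "hours": "Our office is open 09:00-17:00, Monday to Friday.",
    "fees": "Please contact our billing department for a fee estimate.",
}

def rule_based_reply(query: str) -> str:
    for keyword, answer in RULES.items():
        if keyword in query.lower():
            return answer
    return "Sorry, I don't understand."  # anything off-script fails

# 2. Newer approach: an LLM generates a reply one predicted token at a time.
generator = pipeline("text-generation", model="gpt2")

def llm_reply(prompt: str) -> str:
    out = generator(prompt, max_new_tokens=40, do_sample=True)
    return out[0]["generated_text"]

print(rule_based_reply("What are your office hours?"))  # exact keyword match
print(llm_reply("A large language model is"))  # fluent, but unverified, text
```

The rule-based bot can only answer questions its designers anticipated, whereas the LLM will produce fluent text for any input, which is precisely both its appeal and its risk.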

These newer models can learn from significantly larger datasets, giving them near-universal application. The general nature of the use cases, from children creating their own bedtime stories to developers fixing code, combined with varying levels of free public access, means that anyone can interact with these tools to some degree.

What Google's Bard, Meta's Llama and OpenAI's GPT chatbots have in common is the user's ability to type in a query and receive a human-like response. The responses will differ between the chatbots because they are built on different models trained on different datasets.

It has been more than a year since the explosion of generative AI, but there are still many unknowns about its use cases in business. These models may not generate new ideas, but they can free up employee time to do so.

The question is not if LLMs will be incorporated into business, but when and how.

  2. How does a business leverage the benefits of LLMs safely and responsibly?

There is uncertainty about how LLMs can be implemented safely and responsibly in business. It may feel more like setting out on an African overland road trip without a map.

As with any emerging technology, there are sceptics and early birds. Change management is a critical part of any digital transformation and new technology implementation. A key success criterion for change management is defining the scope and objectives and creating a clear roadmap.

It is important to understand the risks. Whether a company deploys or merely utilises AI technologies in the workplace, it should ensure that it has adopted mechanisms for Responsible AI interventions and that those interventions are led from the very top.

  3. What method should a business use to bring AI into its operations - adopt, deploy or develop AI?

There are various options that a business can use to implement and operationalise AI. Any company seeking to acquire AI capacity should determine its risk appetite and review its policies, such as those on privacy and data retention, to ensure that they are aligned to mitigate the risks associated with incorporating AI in the workplace. Key challenges include regulating the use of AI and copyright considerations.

The starting point will be to consider whether a business will adopt, deploy or develop its own AI solution based on its own investment, use case and risk appetite:

  • In-house development: This is where a company develops an AI solution internally within its business. Here the company will have full control over the scope of development, application, use, and regulation of the AI solution. It will also have complete ownership over all source/object code and all outputs generated by the AI solution. However, this process can be very costly and companies may not necessarily have the personnel and technical skills required to develop advanced AI solutions.
  • Outsourcing development: A company can contract a development company to build a bespoke AI solution. The benefits of this approach are similar to those of in-house development: the company, as the developer's client, can set out the scope of development, the use cases and the AI's application. Provided the AI solution is genuinely bespoke, the client company should own all source/object code. The drawbacks are that this approach is costly and carries a risk of project delays and scope creep. Complexity can also arise where the development company uses a multi-tiered development process, building on its own source code or an existing AI solution and further developing or customising it for the client. In that scenario, it becomes critical that the client retains ownership over any bespoke or customised development.
  • Licensing: A company can procure a licence to access and operationalise an AI solution. In this case, the company acquires a pre-built AI solution that has already been trained, tested and refined. The company will not own the AI solution; it remains the property of the third party that developed it. The drawback of this approach is that ownership of data and confidential information becomes problematic, because the company will be supplying information to a third-party AI solution, and it carries data privacy and protection concerns. Nonetheless, investment in generative-AI startups has risen sharply: according to PitchBook, venture funding rose from USD4.8-billion in 2022 to USD12.7-billion in the first five months of 2023.

Each of these routes, in turn, presents a multitude of options and an overwhelming number of decisions to make.

  4. Who can access sensitive business data – can it end up on the dark web?

Aside from the hype of Google versus OpenAI and concerns around these companies monopolising the LLM market, it is practically impossible for any business to police this level of access to information. Cyber criminals have always been ahead of the curve in their use of technology, and their use of LLMs is no different.

OpenAI's ChatGPT reportedly attracts close to 1-billion visits a month. According to Group-IB, 101 134 ChatGPT account credentials were made available on the dark web between June 2022 and May 2023, and almost 25% of these compromised credentials belonged to users in the Middle East and Africa. There have been many reports of employees inadvertently disclosing business-sensitive information, including source code and trade secrets, to OpenAI's ChatGPT. Alongside a compromised ChatGPT user's personal information sits their saved chat history, which may now be accessible to cyber criminals and competitors through the dark web.

According to Reuters, Alphabet Inc is cautioning employees about how they use chatbots, including its own Google Bard. It is advising against entering confidential materials into AI chatbots, citing its long-standing policy on safeguarding information, and warning against the direct use of computer code generated by chatbots.

When chatbots are used, even for personal purposes, educating employees to change passwords regularly, implement two-factor authentication and delete chat history are small steps that mitigate the serious risk of business-sensitive data leakage. It is crucial to seek expert guidance to help navigate the complexities of safely adopting, or prohibiting, the use of the technology.
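As an illustration of what such guidance can look like in practice, here is a minimal sketch, in Python, of a pre-submission screen that blocks prompts containing obviously sensitive content before they leave the business for a third-party chatbot. The patterns, names and blocking logic are illustrative assumptions only; a real deployment would rely on a proper data loss prevention tool.

```python
# Minimal pre-submission screen: block chatbot prompts that appear to contain
# sensitive business data. Patterns below are illustrative assumptions only.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "confidential marker": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): block if any sensitive pattern matches."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    return (not hits, hits)

allowed, reasons = screen_prompt("Summarise this CONFIDENTIAL merger memo ...")
if not allowed:
    print("Blocked before sending to the chatbot:", ", ".join(reasons))
```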

We explore the risks concerning cybersecurity, availability and functionality, and data protection in more detail here.

  5. When can a business rely on the output – can LLMs become a master of the law?

An LLM's degree of reliability depends on its use case and the information it has been trained on. After all, an AI model is only as good as the data it is trained on.

There is no doubt that LLMs will greatly assist the legal world, but it is also important to consider the guardrails around the use of the tools' output. Attorneys citing fictitious cases generated by ChatGPT in court filings are a well-publicised example of the importance of understanding the limitations of any new technology.

Even where the AI has not made up information (often referred to as "hallucinating"), there is still potential harm in adopting its human-like content in a legal context. An LLM is, at its core, predicting the next likely word. Because it has been trained on a vast array of data, not just legal information, there is a very real risk that it will select the word most likely to follow in a general, non-legal context rather than the word a lawyer would have used.
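The point about word prediction can be made concrete. The hedged sketch below, again using the small public gpt2 model as a stand-in and a purely illustrative prompt, prints the model's most probable next tokens for a legal-sounding phrase; nothing in that ranking distinguishes a legally correct continuation from a merely statistically plausible one.

```python
# Inspect an LLM's next-token predictions for a legal-sounding prompt.
# Assumes: pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The party of the first part hereby"  # purely illustrative
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)

# The ranking reflects statistical likelihood across all of the training
# data, not legal correctness in any particular jurisdiction.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r:>14}  p={float(p):.3f}")
```

On a general-purpose model, a clause-like prompt of this kind may well be continued with everyday rather than legally precise language, which is exactly the risk described above.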

The challenge here is that words are carefully curated by the legal profession. Whether in a contractual clause or a high court judgment, words have often been negotiated, argued and defined at great length by legal professionals, and their interpretation may vary by jurisdiction. As such, relying on LLMs to draft agreements and legal documents could have disastrous, unintended consequences if the output is not reviewed by legal experts.

Overcoming this challenge requires carefully curating an up-to-date and accurate legal dataset on which to train the AI reliably, together with a mechanism for updating its learning whenever there are internal policy and procedural, legal or regulatory changes. A model trained on trillions of unvetted pieces of information, including social media, will not provide reliable output for drafting legal documents. The training data must be "cleansed" of bias, refined to remove repetitive and irrelevant information, and assessed for privacy compliance, for example by anonymising or removing personally identifiable information.
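As a rough sketch of what one "cleansing" pass might look like, the example below redacts common categories of personally identifiable information and drops duplicate records from a training corpus. The regexes and placeholder tags are illustrative assumptions only; genuine anonymisation for privacy compliance requires far more than pattern matching.

```python
# One "cleansing" pass over a training corpus: redact common personally
# identifiable information (PII) and drop exact duplicate records.
# The regexes and tags are illustrative assumptions, not a compliance tool.
import re

PII_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d ()-]{7,}\d"), "[PHONE]"),
    (re.compile(r"\b\d{13}\b"), "[ID-NUMBER]"),  # e.g. a 13-digit national ID
]

def cleanse(records: list[str]) -> list[str]:
    seen, cleaned = set(), []
    for text in records:
        for pattern, tag in PII_RULES:
            text = pattern.sub(tag, text)
        if text not in seen:  # drop exact duplicates after redaction
            seen.add(text)
            cleaned.append(text)
    return cleaned

corpus = [
    "Contact J. Smith at j.smith@example.com or +27 11 555 0100.",
    "Contact J. Smith at j.smith@example.com or +27 11 555 0100.",  # duplicate
]
print(cleanse(corpus))  # one record, with the email and phone number redacted
```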

Conclusion

Leaders bear a great responsibility in selecting how AI is integrated within their business. It is not just about defining the appropriate use case and ensuring ethical and secure usage, but also about considering the long-term impact on society and future generations.

ENSafrica's team of expert Technology, Media and Telecommunications lawyers has developed a Responsible AI toolkit to assist its clients in fast-tracking their entry into, and navigating, the world of AI. Any further questions may be directed to our team members.

ENSafrica's legal experts work closely with our in-house technology experts to help businesses start their digital journey and identify specific solutions for their existing problems or value drivers.

We set out here some example steps that can be taken now.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.