Imagine a world where you can create anything you want with just a few words or clicks. Welcome to the era of generative AI.

The latest advancement in AI comes in the form of large language models ("LLMs"), such as the chatbot ChatGPT, that can generate advanced, human-like text.

Some are comparing these LLMs to the invention of the printing press in terms of their importance to the world, and it is easy to see why. In January 2023, just two months after its launch, ChatGPT was estimated to have reached 100 million monthly users, making it the fastest-growing consumer application in history. It is reported to have crossed the 1 billion user mark in March 2023.

LLMs are more than ChatGPT. The market is full of options, such as Microsoft's Bing, Google's Bard, DeepL Write, Marmof, Writer, Twain, ChatSonic, YouChat, Perplexity AI and more.

Latest Developments

As noted in our recent article "Artificial Intelligence Act - Take Your Positions Please", the green light was given on 11 May 2023 to the EU Artificial Intelligence Act by the European Parliament's Civil Liberties and Internal Market committees.

Notably, it was agreed, by a large majority, to ban the use of facial recognition in public spaces and predictive policing tools, and to impose new transparency measures on generative AI applications like ChatGPT.

MEPs included obligations for providers of foundation models, who would have to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. They would need to assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database.

Under the proposed AI Act, generative foundation models, such as ChatGPT, will have to comply with additional transparency requirements, including:

  • disclosing that the content was generated by AI;
  • designing the model to prevent it from generating illegal content; and
  • publishing summaries of copyrighted data used for training.
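To illustrate the first of these requirements, a provider or deployer might attach a plain-language disclosure notice to every piece of generated text before it is published. The sketch below is a minimal Python illustration only; the function name, the wording of the notice and the model name are our own assumptions, as the proposed Act does not prescribe a specific disclosure format.

```python
def label_ai_output(text: str, model_name: str) -> str:
    """Prepend a disclosure notice to AI-generated text.

    The wording is illustrative; organisations would adapt it to
    their own policies and to any final regulatory guidance.
    """
    disclosure = f"[This content was generated by AI ({model_name}).]"
    return f"{disclosure}\n{text}"

# Example: labelling a draft before publication
print(label_ai_output("Draft marketing copy for the Q3 campaign.", "ExampleLLM"))
```

In practice the label might instead be embedded in metadata or a watermark; the point is simply that the disclosure step is applied consistently before content leaves the organisation.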

As a real-world example of the risks of using AI tools, this month a national newspaper, the Irish Times, was quick to apologise for and take down a contentious opinion piece suggesting that the use of fake tan by Irish women was cultural appropriation.

However, within 24 hours the article was found to be fake: it was AI-generated and had slipped through the editorial process unnoticed. The incident highlighted the challenges of using innovative AI tools and the need to have the right protections in place, unless you want to get it "badly wrong" (as an Irish Times editor is reported to have said).

Though there are risks, LLMs also offer advantages.

What are the advantages of LLMs?

LLMs can, for example, help financial and pharmaceutical businesses to improve efficiency, increase productivity and make data-driven decisions. Highlighted below are a few deployment possibilities noted by service providers:

The Financial Sector

  • Customer Service: customer queries can be handled by an LLM;
  • Automation: routine tasks such as onboarding new customers can be done by AI;
  • Fraud Detection: LLMs have the ability to analyse vast amounts of complex data to identify patterns and anomalies that can be used to detect money laundering or fraud;
  • Risk Management: potential vulnerabilities can be identified leading to better decision making by financial institutions, mitigating risks and avoiding financial loss; and
  • Marketing: LLMs can produce articles, blog posts and advertisements.

The Pharmaceutical Sector

  • Patient Engagement: modern medical systems are fighting a losing battle to meet patient engagement needs. Virtual assistants powered by LLMs can provide patients with personalised care, including medication management and symptom tracking. This could improve patient outcomes and reduce the burden on healthcare providers;
  • Drug Discovery: LLMs have the ability to analyse vast amounts of complex data to identify patterns which could inform drug discovery;
  • Clinical Trials: improve patient recruitment and clinical trial design; and
  • Improved Treatments: LLMs can analyse data to predict potential risks and outcomes of treatments, allowing for improved decision making by professionals.

What are the top considerations when choosing an LLM?

LLMs are not without their flaws, and there are some important factors to consider when using or implementing LLMs within an organisation or business process.

Privacy and Security

LLMs may use personal data to train on or to generate content, without consent or protection. This creates risks of data breaches, identity theft, bias or discrimination for the individuals whose data is used. Users must therefore ensure robust security measures are employed. Generative AI has the potential both to support and to hinder cybersecurity (e.g. it can be used to craft phishing attacks).
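One practical safeguard is to screen prompts for obvious personal data before they are sent to a third-party LLM service. The sketch below is a deliberately minimal Python illustration under our own assumptions; real systems would need far more robust detection (for example, named-entity recognition and locale-specific formats), and the patterns shown are not a compliance standard.

```python
import re

# Illustrative patterns only; production-grade PII detection is much broader.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace likely personal data with placeholders before the
    prompt leaves the organisation."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or +353 1 234 5678"))
```

A redaction step like this sits naturally in front of any API call to an external model, so that personal data never reaches the provider in the first place.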


Accuracy and Ethics

LLMs are known in some instances to be 'confidently incorrect'. Sam Altman, CEO of OpenAI, has warned that "it's a mistake to be relying on it [ChatGPT] for anything important. It's a preview of progress; we have lots of work to do on robustness and truthfulness."


Intellectual Property

LLMs may copy or rephrase existing content from other sources without permission or attribution, violating the rights of the original authors or owners. This may also create plagiarism issues and unauthorised derivative works. Some tools require the user to disclose that the output was generated, in part, using generative AI models.

Misleading Representations

LLMs may generate false or inaccurate content that can mislead or deceive consumers (e.g. deepfakes). This may also create liability issues for the creators or users of generative AI if they fail to disclose the use of AI or verify the accuracy of the content.

What are some measures a company could take to help manage the risk?

If you are considering implementing AI-related tools in your business, and in anticipation of the AI Act, some immediate steps you could take are as follows:

A. Conduct a Comprehensive Legal and Ethical Assessment

When carrying out the assessment, legal and compliance teams should be engaged to assess the legal and regulatory AI landscape specific to your industry and jurisdiction. The review would typically include: identifying challenges encompassing data protection, security, intellectual property, and consumer protection laws; evaluating ethical considerations like bias, transparency, and accountability; and conducting ongoing monitoring and auditing to identify any emerging issues and ensure consistent compliance.

B. Establish Robust Governance and Risk Management Frameworks

This would include the development of clear company policies and procedures governing the use of LLMs. Depending on requirements, the company may consider establishing an oversight committee of experts to monitor and manage the risks; such oversight is already anticipated as an aid to compliance with the AI Act. In addition, AI is generally only as effective as the data used to train it, so an oversight process based on thorough monitoring, validating the outputs, thresholds and other aspects of the system, could help maintain its overall accuracy and efficiency.
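By way of illustration, such an oversight process might begin with a simple automated gate that screens LLM outputs against company-defined rules before they reach customers, with failures routed to a human reviewer. This is a minimal sketch under our own assumptions; the banned terms, length threshold and review routing are illustrative only, not a governance standard.

```python
# Company-defined policy parameters (illustrative values).
BANNED_TERMS = {"guaranteed return", "medical diagnosis"}
MAX_LENGTH = 2000

def passes_review(output: str) -> bool:
    """Return True only if the LLM output clears basic policy checks.

    Outputs that fail would be escalated to a human reviewer rather
    than published automatically.
    """
    if len(output) > MAX_LENGTH:
        return False
    lowered = output.lower()
    return not any(term in lowered for term in BANNED_TERMS)

print(passes_review("Our Q2 newsletter draft."))
print(passes_review("A guaranteed return of 12% per year!"))
```

Logging each decision made by a gate like this also produces the audit trail that ongoing monitoring and compliance reviews depend on.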

C. Implement Strong Data Security and Privacy Measures

This would include assessing the security protocols and infrastructure required to protect data used by LLMs, together with revisiting the company's data protection measures.

D. Conduct Training

It is important to build integrity into AI systems and tools at the design stage (similar to privacy by design and by default). Personnel employed to support the implementation and use of those systems and tools will need appropriate training on the risks and their compliance obligations.

AI should be used in line with the organisation's codes of conduct, values and applicable legal limits from the outset.

It is important to remember that this type of emerging technology is still in its infancy and, as would be expected, has drawbacks.

Many risks in relation to data protection are being uncovered, and we predict that this will expand to other areas such as intellectual property and AI regulation. It is hoped that the suggestions above will help you navigate the risks before incorporating generative AI tools into your live production environment.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.