OpenAI, the creator of the (in)famous ChatGPT, recently announced that its chatbot will now be able to access more up-to-date information, with knowledge of the world up to April 2023. When the chatbot was first released in November 2022, it only had access to information up to September 2021. Despite this "informational black hole", ChatGPT still reached 1 million users within five days of its launch, and with OpenAI's most recent announcement, its popularity is likely to increase again. Although exciting, the use of chatbots is something employers should monitor in their workplaces, particularly where it is unsanctioned.
Why should employers be worried?
In recent months, multiple employers, including global tech giants, have been the subject of "conversational AI leaks": incidents in which sensitive data or information fed into chatbots like ChatGPT is unintentionally exposed. When information is disclosed to a chatbot, it is sent to a third-party server and may be used to train the chatbot further. In simple terms, this means that the information input into the chatbot may be used by the chatbot to generate future responses. This becomes particularly problematic when the chatbot has access to, and is using, confidential, sensitive or personal information that should not be publicly available. There are numerous examples of employees accidentally disclosing sensitive employer information whilst using publicly available chatbots, without their employer's knowledge, to carry out their employment-related duties. Examples of these conversational AI leaks include employees using chatbots to (i) identify errors in source code; (ii) optimise source code; and (iii) generate meeting notes from an uploaded recording.
With the growing popularity of chatbots, conversational AI leaks have become increasingly prevalent. IBM's Cost of a Data Breach Report 2023 states that between March 2022 and March 2023, the global average cost of a data breach reached an all-time high of USD4.45 million and, for South Africa specifically, exceeded ZAR50 million. The cost of a conversational AI leak can be crippling for an employer and, as a result, employers should be front-footed in their approach to the use of chatbots in the workplace.
How can employers regulate and control the use of ChatGPT in the workplace?
An employer's knee-jerk reaction may be to prohibit the use of ChatGPT for specific work-related tasks or queries. However, there are alternative options available to companies aiming to leverage the benefits of ChatGPT responsibly and to implement Responsible AI in the workplace. Such options may include: (i) procuring an enterprise licence for ChatGPT; or (ii) in the absence of any specific laws or regulations, self-regulating the use of AI tools, like ChatGPT, through policies and training interventions.
OpenAI has now launched an enterprise version that allows individuals and employers alike not only to create their own chatbots but also to ring-fence the information on which they are trained. The intention is that, in theory, information input into the chatbot is not used to train the publicly available versions of the tool. The enterprise version is also said to offer enterprise-grade security and privacy. The introduction of this paid version of the tool has the potential to reduce the risk of conversational AI leaks.
There is no one-size-fits-all approach to AI risk management, and the approach adopted will largely depend on the extent to which AI is incorporated into an employer's operations. However, regardless of the approach adopted, there is always a risk that employees will turn to non-approved AI tools for assistance. If employers do not take a proactive stance in regulating ChatGPT's usage in the workplace, it could lead to a situation of "shadow IT". Shadow IT describes a situation where employees use software or tools that have not been officially approved by the employer, leading to an unsanctioned IT environment existing in parallel to the employer's approved IT infrastructure and systems. The problem with this is that there is no internal regulation, security or governance over the shadow IT, which may expose the employer to security vulnerabilities, data leaks, intellectual property disclosure and other issues. Employers and employees should therefore remain cautious about which generative AI tools they use, where they source their information from and what information is shared in the process.
Accordingly, the key lessons from the conversational AI leaks we have seen to date are:
For employers:
- Take a proactive approach to regulating the use of generative AI in the workplace by:
  - Procuring enterprise version licences; and/or
  - Implementing internal policies and procedures to regulate organisational use of generative AI;
- Review your contracts with AI service providers to ensure that you adequately protect your intellectual property;
- Ensure data security is your top priority and provide generative AI with information on a need-to-know basis;
- Ensure you customise your personalised chatbots responsibly and ethically;
- Train employees on how to use chatbots responsibly;
- Monitor chatbots' compliance with privacy regulations and data protection measures, as well as with internal employer policies; and
- Implement and maintain internal employer policies governing the use of generative AI in the workplace, addressing, inter alia:
  - Authorised generative AI systems;
  - Acceptable use;
  - Prohibited activities, such as sharing of personal or confidential information;
  - Data protection;
  - Intellectual property; and
  - Liability and disciplinary procedures.
For employees:
- Do not use AI tools that have not been approved by your employer for work purposes;
- Do not share personal, proprietary, or confidential information with chatbots;
- Do not upload any employer intellectual property (copyrighted material, such as data, documents, or source code);
- Understand the legal, commercial and technical risks associated with the use of chatbots, as well as any policies implemented by the employer;
- Confirm the accuracy of chatbot responses, particularly where the responses may influence critical decisions (in other words, ensure a human being vets the responses of the chatbot and applies their mind to the output);
- Implement processes to monitor and prevent data bias and discriminatory outputs being generated by the chatbots;
- Familiarise yourself with acceptable chatbot usage and emerging standards, guidelines, and frameworks on ethical and Responsible AI; and
- Report any security or privacy concerns when using chatbots.
The above takeaways are useful to consider when attempting to harness the potential of the ever-evolving generative AI space, whilst simultaneously preserving data privacy, security and intellectual property.
ENS' team of expert Technology, Media and Telecommunications lawyers, together with its labour law experts, has developed a 'Responsible AI toolkit' to assist clients in fast-tracking entry into and navigating the world of AI.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.