As organisations embark on the Artificial Intelligence ("AI") journey, beyond the use of common AI tools such as ChatGPT and DeepSeek, they often face uncertainty, not only about whether a specific AI tool is fit for purpose, but also about whether a new tool will deliver real-world value. It is fast becoming standard practice for organisations seeking to adopt AI tools to avoid a "Big Bang" approach, which may be both risky and costly, and to embark instead on a Proof of Concept ("POC"). An AI POC is a limited-scope project designed to test whether a specific AI tool can solve a specific business problem, allowing the organisation to take AI "for a spin" whilst also identifying the limitations and potential challenges associated with adopting the tool.
In theory, POCs are low risk; in practice, they carry significant legal and non-legal risks, not least because AI POCs can involve personal data, automated decision-making, or third-party tools.
What Is an AI POC?
An AI POC is a trial run aimed at testing the technical feasibility and business value of an AI model. It also serves to assess whether the AI tool is being adopted because it offers genuine utility and is fit for the organisation's purpose, rather than out of "FOMO" (fear of missing out). For example:
- A bank might test whether a machine learning model can detect transaction fraud;
- A mining house may test whether an AI tool can increase efficiencies in the extraction process; or
- A retailer might test AI-generated customer profiles to improve product recommendations.
These projects are usually time-bound, use a limited dataset, and are not intended for production until the POC has met goals and success criteria specified upfront. A POC also serves as a practical demo, letting users determine whether the tool is fit for purpose or frustrating to use.
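To make this concrete, the sketch below shows what evaluating a POC against upfront success criteria might look like in code, using the fraud-detection example. It is a minimal sketch only: the dataset is synthetic, and the model choice and threshold values are hypothetical assumptions, not recommendations.

```python
# Illustrative only: a fraud-detection POC evaluated against success
# criteria agreed upfront. Dataset, model and thresholds are
# hypothetical assumptions, not recommendations.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Success criteria defined before the POC begins (hypothetical values).
SUCCESS_CRITERIA = {"precision": 0.80, "recall": 0.70}

# Stand-in for a limited, approved POC dataset (3% "fraud" cases).
X, y = make_classification(n_samples=5_000, n_features=20,
                           weights=[0.97, 0.03], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

# Measure the trial against the agreed criteria, not a vague "it works".
results = {"precision": precision_score(y_test, pred),
           "recall": recall_score(y_test, pred)}
passed = all(results[m] >= t for m, t in SUCCESS_CRITERIA.items())
print(results, "POC passed" if passed else "POC failed")
```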
POCs are not without risk, and certain risk mitigation steps must be taken before embarking on a POC.
Legal Risks: Don't treat a POC like a Sandbox
- Data Privacy and Protection
AI POCs often rely on personal information or sensitive data to generate results. This could trigger compliance obligations under privacy legislation such as the Protection of Personal Information Act, 2013 ("POPIA") in South Africa:
- Lawful Processing: Pseudonymised or anonymised data should be considered as an alternative to using personal information. In the absence of a lawful basis for processing or further processing personal information, such methods may be the more practical option (see the sketch after this list).
- Data Minimisation: The organisation may not collect or process more personal information than is necessary for the POC's limited objective.
- Cross-Border Transfer: Cloud-based POCs using offshore servers may trigger cross-border transfer requirements and, in some jurisdictions, data localisation requirements.
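As a minimal sketch of the pseudonymisation and minimisation points above, the example below replaces a direct identifier with a keyed hash and drops fields the POC does not need. The column names and key handling are hypothetical assumptions; note also that under POPIA pseudonymised data generally remains personal information, so this reduces rather than removes compliance risk.

```python
# Illustrative only: pseudonymise a direct identifier with a keyed hash
# (HMAC) and drop fields the POC does not need (data minimisation).
# Column names and the key source are hypothetical assumptions.
import hashlib
import hmac
import os

import pandas as pd

# Key kept outside the POC environment; placeholder default for the demo.
SECRET_KEY = os.environ.get("POC_PSEUDONYM_KEY", "change-me").encode()

def pseudonymise(value: str) -> str:
    """Map an identifier to a stable pseudonym: the same input always
    yields the same token, so records stay linkable within the POC."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

customers = pd.DataFrame({
    "id_number": ["8001015009087", "9202024800083"],  # direct identifier
    "email": ["a@example.com", "b@example.com"],      # not needed for POC
    "spend": [1200.50, 310.00],                       # needed for POC
})

poc_data = (customers
            .assign(subject=customers["id_number"].map(pseudonymise))
            .drop(columns=["id_number", "email"]))    # minimisation
print(poc_data)
```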
- Intellectual Property (IP)
IP ownership in POCs is multifaceted and could extend to issues of ownership of the AI tool itself, of AI inputs and of AI-generated outputs. Further complexities may also arise, such as where the AI tool is built on open-source software. It is critical that these risks are addressed contractually.
Failing to address these points upfront can lead to IP-related disputes down the line or trigger third-party IP claims against the organisation.
- AI Bias and Discrimination
AI bias and discrimination are a growing concern globally. AI POCs involving recruitment, credit scoring or insurance risk breaching equality laws if the algorithm produces biased outcomes based on race, gender or disability, even where this happens unintentionally. Organisations should consider this when developing an AI POC to avoid breaching the law or attracting public backlash.
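One simple way bias can be surfaced during a POC is to compare outcome rates across demographic groups. The sketch below uses the "four-fifths" heuristic drawn from US employment-selection guidance; the data and group labels are hypothetical, and a real assessment would need legal and statistical input.

```python
# Illustrative only: compare selection rates across groups using the
# "four-fifths" disparate-impact heuristic. Outcomes and group labels
# are hypothetical assumptions, not real data.
import pandas as pd

outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = outcomes.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()   # disparate-impact ratio
print(rates.to_dict(), f"ratio={ratio:.2f}")
if ratio < 0.8:                     # four-fifths threshold
    print("Potential disparate impact: investigate before proceeding")
```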
- Contractual and Commercial Considerations
Even at POC stage, it is critical that legal input is solicited and that an appropriately drafted POC agreement is put in place. Ultimately, the success of a POC rests on the nature and detail of the contract concluded between the parties.
Non-Legal Risks: Practical Pitfalls to Anticipate
- Poor Data Quality: Even the most sophisticated model can fail due to incomplete, outdated, or unstructured data. A POC that "fails" may not reflect the model's potential but only the dataset's flaws; basic data-quality checks run upfront can surface this (see the sketch after this list).
- Unrealistic Expectations: Many stakeholders view AI as a "magic bullet". Legal teams should temper this by insisting on defined success metrics before the project begins.
- Change Resistance: Employees may resist AI tools out of fear that automation means redundancy. It is critical that organisations involve expert labour lawyers and HR practitioners to mitigate the risks of job displacement and to manage staff reskilling and change management.
- Reputational Risk: Utilising certain AI tools can have serious consequences for a company's reputation; an AI tool that unfairly rejects job applicants or misclassifies customers can damage public trust, even if it was only a trial. Several technology companies have experienced this, and the damage to a company's image can ultimately lead to customer defection.
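Following on the data-quality point above, a POC team might run basic checks on the dataset before drawing conclusions about the model. The sketch below is illustrative only; the column names and thresholds are hypothetical assumptions.

```python
# Illustrative only: basic data-quality checks to run before a POC,
# so a "failed" POC is not blamed on the model when the data is at
# fault. Column names and thresholds are hypothetical assumptions.
import pandas as pd

def data_quality_report(df: pd.DataFrame, max_missing: float = 0.05) -> dict:
    """Flag columns with excessive missing values and duplicate rows."""
    missing = df.isna().mean()  # fraction missing per column
    problems = {col: f"{frac:.0%} missing"
                for col, frac in missing.items() if frac > max_missing}
    dupes = int(df.duplicated().sum())
    if dupes:
        problems["__rows__"] = f"{dupes} duplicate rows"
    return problems

# Hypothetical POC extract with a deliberate gap and a duplicate row.
df = pd.DataFrame({"amount": [100.0, None, 250.0, 250.0],
                   "merchant": ["a", "b", "c", "c"]})
print(data_quality_report(df) or "No data-quality issues flagged")
```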
Conclusion: POCs deserve legal oversight
AI POCs are not exempt from regulatory scrutiny. If personal data is processed, if decisions are made that affect individuals (for example, AI hiring software), or if commercial partnerships are involved, legal teams must have a seat at the table from day one. The cost of ignoring these risks is often greater than the cost of early legal involvement: the choice is between once-off legal fees now, or court battles and lawsuits that can drag on for years.
Need help vetting your next AI initiative?
As AI continues to reshape the regulatory and commercial landscape, legal insight is not optional; it is strategic. Organisations should solicit expert legal advice both before embarking on a POC and once the POC has concluded.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.