ARTICLE
5 November 2025

The Complexities And Costs Associated With Integrating AI Into Existing Systems

Webber Wentzel

Contributor

Webber Wentzel's team of experienced advisors provides multi-disciplinary legal and tax services to clients operating in and across the African continent. Whether clients are expanding into Africa or growing operations across the continent or globally, Webber Wentzel offers exceptional client service and has an outstanding track record of working on some of the most sustainable and transformative matters in Africa.

Artificial Intelligence (AI) is no longer a distant concept; it has become an integral part of modern life and business. The choice for businesses is straightforward: ignore it and risk falling behind, or embrace it and stay ahead. The real challenge, however, lies in determining how to integrate AI successfully into existing company systems. While legal risks are inherent in any AI implementation, when deployed correctly with appropriate safeguards, AI systems can move beyond being mere tools to become strategic enablers of enhanced operational efficiency and sustainable long-term growth.

The starting point for any AI implementation is answering a fundamental question: why does your organisation require AI, and what specific role should it fulfil within your operational framework? Whether the goal is to help employees summarise lengthy documents, assist with research, or proofread and improve the quality of drafts, defining the "why" is essential. Only once these objectives are clearly defined can organisations design and implement AI solutions that align with their specific operational requirements.

Regardless of the implementation methodology adopted, integrating AI systems into existing business environments presents considerable complexity that extends far beyond basic system integration. While the potential benefits are significant, organisations must navigate numerous challenges before deploying AI systems. Chief among these, where AI systems are designed to process client documents or generate content summaries, organisations must implement robust safeguards against data breaches and confidentiality violations. This encompasses comprehensive staff training on the appropriate handling of sensitive information and ensuring full compliance with applicable data protection legislation, including the Protection of Personal Information Act (POPIA).

Cost considerations represent another critical factor in AI deployment. Implementation typically involves substantial ongoing financial commitments rather than one-time expenditures. Licensing fees, infrastructure upgrades, and system integration costs can rapidly escalate into significant recurring operational expenses. Beyond initial deployment, organisations must invest in comprehensive employee training to ensure effective and responsible system utilisation. Additionally, dedicated technical teams are typically required for ongoing system monitoring, troubleshooting, and management of operational challenges and emerging risks.

Legal exposure and accountability

The deployment of AI systems presents significant legal and regulatory risks, particularly when such systems generate outputs based on external datasets. AI models are typically trained on vast arrays of third-party content which may include copyrighted, trade-marked, or otherwise protected materials. Without firm governance, this can lead to the unintended use or replication of intellectual property, exposing organisations to infringement claims, regulatory scrutiny, and reputational harm.

These risks are amplified in high-stakes environments – such as legal drafting, marketing, and product innovation – where AI-generated outputs may incorporate or summarise client materials, proprietary datasets, or sensitive information. Even unintentional incorporation of protected content can constitute legal infringement, particularly where outputs resemble derivative works.

Some of these pressure points are predictable. Training data is not always transparent. Audit trails may not show what informed a specific passage. It is not always clear whether an output could be treated as a derivative work. Contracts can also be vague about who carries liability for infringement, data breaches, or POPIA non-compliance.

The consequences are real – interdicts, damages, takedown demands, regulatory action, contractual disputes, and reputational harm.

These risks can be effectively managed through the implementation of disciplined practices and robust governance frameworks:

  • Establish comprehensive policies. Implement a written AI-use policy that clearly defines approved use cases, prohibited applications, and circumstances requiring human review and oversight.
  • Deploy controlled environments. Ensure that confidential or personal data is excluded from consumer-grade tools and implement compliant enterprise solutions with comprehensive logging and access controls.
  • Implement secure architecture. Utilise retrieval-augmented generation systems that draw exclusively from vetted, licensed sources rather than uncontrolled internet content.
  • Maintain comprehensive traceability. Preserve detailed logs of all prompts and outputs to ensure accountability and enable explanation of actions taken and responsible parties.
  • Secure contractual safeguards. Obtain comprehensive intellectual property indemnities from AI providers. Prohibit training on your data. Require explicit data-processing commitments and data localisation, along with rapid incident notification procedures and flow-down obligations for sub-processors. Establish liability caps that appropriately reflect your risk exposure.
  • Address regulated industries. In finance, healthcare, or telecommunications, implement enhanced due diligence procedures, comprehensive record-keeping requirements, and robust model-risk governance frameworks.
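To illustrate the traceability point above, the sketch below shows one simple way an organisation might preserve an auditable record of prompts and outputs. It is a minimal illustration only, assuming an append-only JSONL log; the function name, field names, and file name are placeholders rather than any specific vendor's API.

```python
# Minimal sketch of prompt/output traceability logging (illustrative only).
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "audit_log.jsonl"  # assumed append-only audit file


def log_interaction(user_id: str, prompt: str, output: str, model: str) -> dict:
    """Append one prompt/output pair to the audit log and return the record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model": model,
        "prompt": prompt,
        "output": output,
        # A content hash lets later reviewers check the record was not altered.
        "integrity_hash": hashlib.sha256(
            (user_id + prompt + output).encode("utf-8")
        ).hexdigest(),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record


record = log_interaction(
    "user-001",
    "Summarise the attached lease agreement.",
    "The lease runs for five years and renews annually thereafter.",
    "internal-model",
)
```

A real deployment would add access controls, retention policies, and redaction of personal information before logging, consistent with POPIA obligations; the point here is only that each output can be traced back to a prompt, a user, and a time.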

When implemented with these safeguards, AI can be deployed with greater confidence while maintaining appropriate legal and reputational risk management.

Finally, there are the less visible but equally significant opportunity costs. Businesses may invest heavily in AI with the expectation of long-term savings or efficiency gains, only to discover that the solution is misaligned with their strategic objectives or incompatible with their existing technology and infrastructure. If the AI fails to deliver meaningful value or integrate effectively with core operations, the investment becomes a sunk cost. This is why selecting the right AI platform, and implementing appropriate controls when doing so, is not merely important but essential.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
