In a matter of months, generative artificial intelligence (AI) has shifted from an emerging technology to a mainstream business tool, with applications such as ChatGPT driving widespread adoption across industries. As organizations increasingly experiment with AI to enhance efficiency and decision-making, regulators around the world are working to balance innovation with accountability.
The rapid pace of deployment raises complex questions about governance, oversight and, critically, liability. When an algorithm makes a mistake, who is responsible – the developer or the deployer? This article examines how existing and proposed legal frameworks address that question and how Canadian law is adapting to the realities of autonomous decision-making.
A. (Lack of) AI regulation
Canada's proposed Artificial Intelligence and Data Act (AIDA), which was once expected to become the country's first comprehensive AI law, has effectively died following the recent federal election. As of December 2025, Canada does not have a single, overarching statute dedicated to the regulation of AI. In the absence of comprehensive AI legislation, oversight of AI technologies has fallen to existing laws, particularly privacy statutes, which now serve as an imperfect proxy for AI regulation.
The Honourable Evan Solomon, Canada's first Minister of Artificial Intelligence and Digital Innovation, recently emphasized the federal government's continued commitment to fostering innovation, noting that "AI is a powerful driver of growth, productivity and innovation. It's key to making sure Canada not only competes – but leads – on the global stage."
While the creation of a dedicated ministry signals Canada's recognition of AI's transformative potential, the absence of a clear legislative framework leaves legal professionals and policymakers to rely on a patchwork of existing statutes to address emerging risks. Until comprehensive legislation is enacted, questions of accountability, privacy and ethical use will continue to be addressed by laws not designed for the age of AI.
B. Liability from the use of AI
As AI technology transforms industries (from finance and healthcare to legal services and beyond), organizations are increasingly grappling with how traditional doctrines of negligence, product liability and fiduciary duty apply to the deployment and use of AI. Organizations deploying AI can generally expect to bear responsibility for the outcomes of its use, underscoring the importance of careful governance, risk management and contractual safeguards.
In Moffatt v. Air Canada, 2024 BCCRT 149, a customer seeking bereavement fares relied on an AI chatbot embedded in Air Canada's website and was incorrectly informed that he could purchase tickets at the regular price and apply for the discounted bereavement fare retroactively. Acting on this misinformation, the customer purchased tickets at full price, only to have his subsequent refund request refused. When the customer brought the dispute to the BC Civil Resolution Tribunal, Air Canada disclaimed responsibility, contending that the chatbot operated independently and that the airline could not be held liable for its "agent's" representations. The Tribunal rejected Air Canada's defence and found the airline liable for negligent misrepresentation. Emphasizing that all content on a website – whether static or generated interactively by AI – falls under the company's responsibility, the Tribunal awarded the customer $812.02 in damages.
The decision underscores that an organization cannot escape liability by characterizing AI technology as an autonomous entity detached from its own operations. In contexts where AI makes incorrect predictions or decisions (such as misinterpreting data or failing to perform a designated task), legal liability may ultimately rest with the deployer of the AI technology. For example, if an AI system fails to identify a risk or carries out an action incorrectly on behalf of a professional or an organization, a claim in negligence could arise against that individual or the entity that deployed or contracted for the AI system.
In another case,1 the Information and Privacy Commissioner of Ontario found that McMaster University's use of an AI-enabled proctoring tool breached the privacy statute applicable to universities, namely the Freedom of Information and Protection of Privacy Act. The University failed to provide adequate notice to students about the personal information being captured during the proctoring of an exam, ranging from video recordings to behavioural analytics, and its contract with the vendor lacked sufficient safeguards to protect that data. As a result, the Commissioner ordered McMaster to amend its notice provisions and strengthen its contractual and technical controls.
This decision underscores that organizations deploying AI must rigorously assess the nature and scope of any data collected, ensure that end-users receive clear, informed notice of collection purposes, and embed robust privacy and security measures into both vendor agreements and system design. Ultimately, liability for non-compliance with privacy law rests with the AI's deployer. Even when a third party develops or hosts the technology, the deployer must confirm that all statutory requirements – notice, consent, purpose limitation and data protection – are met, or face regulatory enforcement and reputational harm.
To answer the question posed at the beginning of this article, the limited case law seems to suggest that when an algorithm makes a mistake, responsibility ultimately rests with the organization that deploys the technology. Legal liability cannot be outsourced to technology vendors. To manage this risk, organizations must conduct thorough due diligence, incorporate clear and enforceable privacy and performance obligations in vendor agreements, secure comprehensive indemnities, maintain audit and termination rights, and continuously monitor AI systems to ensure compliance with applicable laws and regulatory expectations.
C. Contracting for AI
To mitigate some of the risks discussed above, organizations should thoroughly examine AI-vendor contracts to ensure that critical issues (e.g., privacy, data governance, regulatory compliance, liability allocation, performance guarantees and intellectual-property rights) are addressed in clear, enforceable terms.
When contracting for AI solutions, purchasers must ensure the agreement clearly addresses data ownership and privacy obligations, delineates intellectual property rights (including any co-development or model-enhancement outputs), and appropriately allocates liability (through indemnities, insurance requirements and damage caps) for algorithmic errors or data breaches. Equally important are performance guarantees and service-level commitments that bind the vendor to accuracy thresholds and uptime metrics. The agreement should also include clauses requiring the AI vendor to conduct bias audits, maintain transparency and explainability, and adhere to applicable sector-specific regulations. Governance provisions (such as audit rights, change-management protocols and exit-transition plans) help safeguard operational continuity, data integrity and legal compliance.
By embedding these critical terms in vendor-deployer contracts, organizations can manage operational and legal risk, maintain stakeholder trust and establish a strong contractual foundation for deploying AI responsibly. Negotiating robust vendor agreements is no longer optional: an AI solution is not just software; it intersects with privacy, regulatory oversight, ethical obligations and an evolving liability landscape. The following is a summary of key contract terms for organizations to consider when contracting with AI vendors:
- Intellectual property ownership and licensing: When AI technology generates content, ownership of the outputs can be unclear. Traditional Canadian intellectual property law typically grants ownership to the human creator of a work. However, if AI creates content autonomously, without direct human intervention, questions arise about who holds the rights in the work. In many jurisdictions, intellectual property laws do not recognize AI as a legal creator or owner of a work, so the owner may in fact be the developer of the AI technology or the user that directed the AI to create the work. This question has yet to be decided in Canada. AI vendors often guard their algorithms and training data as proprietary assets; organizations utilizing AI technology must nonetheless ensure they own, or have the right to use, the data and outputs generated by the vendor's AI technology. If an AI vendor uses data without an express licence from the data owner, using the AI to generate content could expose the purchaser (the deployer) or user of the AI technology to claims of copyright infringement. When deploying AI, it is critical for entities to ensure that their vendor complies with all applicable laws when providing the AI services.
- License scope and duration. Clarify the scope and duration of the licence for use of the AI model and all derived outputs.
- Ownership of improvements. Clarify who owns modifications, customizations or new models developed jointly during deployment.
- Open-source components. Identify any third-party open-source libraries and ensure licensing terms (e.g., GPL, Apache) do not inadvertently impose copyleft obligations.
- Data-driven intellectual property. Address whether data contributions from the end-user or the deployer will be used by the vendor to refine or train future products.
- Data privacy and security obligations: An AI contract must clearly allocate responsibility for:
- Data ownership and stewardship. Does the end-user or the deployer retain full ownership of raw and de-identified data? Are vendor claims of "anonymization" aligned with Canadian legal standards?
- Consent management. Who obtains consent for secondary uses of personal data (e.g., model training)? Contracts should require vendors to support audit trails of consent and revocation requests.
- Security controls. The vendor must implement encryption in transit and at rest, multi-factor authentication for administrative access, and timely breach notification protocols consistent with federal and provincial timelines.
- Cross-border transfers. If AI model training or hosting uses servers outside Canada, ensure the contract restricts unauthorized international transfers or subjects them to equivalent safeguards.
- Regulatory compliance and certification: Organizations should ensure that their AI vendor contracts require ongoing compliance with all applicable laws, industry standards and certification requirements relevant to the deployment and use of AI technologies.
- Liability, indemnity and insurance: AI decisions can have significant real-world consequences, so contract language on risk allocation is paramount:
- Liability caps. Negotiate caps on damages that reflect the end-user's or the deployer's exposure (for instance, proportional to contract value), while carving out exceptions for gross negligence or wilful misconduct.
- Professional indemnity. Require the vendor to indemnify the end-user and the deployer for third-party claims arising from AI recommendations, data breaches or intellectual property infringement.
- Insurance requirements. Mandate minimum levels of cyber-liability, errors and omissions, and any other relevant professional liability insurance, with the deployer named as an additional insured and primary beneficiary.
- Performance, Service Level Agreements (SLA) and lifecycle management: AI performance can degrade over time due to data drift or changes in business practice. To account for this, contracts with AI vendors should include:
- SLA. Define uptime guarantees, maximum latency for real-time decision support and ticket resolution timelines.
- Model retraining and validation. Establish triggers – such as performance dropping below X% accuracy – requiring the vendor to retrain or recalibrate the model at its own cost.
- Version control and change management. Require a governance process for deploying new model versions, with rollback rights if adverse performance trends emerge.
- Exit-scenario continuity. Ensure transition-period support so that operations are not disrupted if the contract terminates prematurely.
- Transparency, explainability and ethical safeguards: Trust in AI hinges on understanding its outputs. Contracts must address:
- Explainability commitments. Require AI vendors to provide interpretable explanations of AI recommendations, tailored for end-users and auditors.
- Bias testing and audit rights. Obligate regular fairness assessments across demographic groups and grant the deployer audit access to bias-testing reports.
- Ethical use covenants. Prohibit uses of the AI technology for purposes beyond the agreed-upon workflows (e.g., insurance underwriting).
- Training, change management and governance: Even the most accurate AI technology fails without proper end-user adoption. It is therefore best practice for organizations using AI technology to include the following terms in their contracts with AI vendors:
- Comprehensive training. Oblige the vendor to deliver role-based training curricula for end-users, IT staff and administrators, with refresher sessions as models evolve.
- Joint governance committee. Create a steering committee staffed by representatives of the deployer and the vendor to oversee deployment, escalate risks and approve major updates.
- Term, termination and transition: A well-drafted exit plan ensures that organizations employing AI technology have the following rights:
- Termination rights. Include termination for cause (e.g., material breach, regulatory lapse) and, where feasible, termination for convenience with reasonable notice and wind-down support.
- Data return or destruction. Specify formats and timelines for returning or securely destroying both end-user data and derivative model artifacts.
- Post-termination support. Negotiate a transitional services period, typically 90 to 180 days, during which the vendor continues hosting, support and data access at pre-agreed rates.
Negotiating AI vendor contracts demands meticulous attention to privacy, regulation, intellectual property, liability, performance and ethics. Each clause shapes the balance between harnessing AI's transformative potential and ensuring the responsible use of AI.
D. Conclusion
As AI technology reshapes the Canadian business landscape, the need has never been greater for organizations to establish a resilient legal framework governing its use. The regulation of AI in Canada remains a patchwork of outdated statutes and non-binding policy principles. Organizations adopting AI must take a cautious and informed approach, maximizing the benefits of AI and data-driven insights while mitigating the inherent risks. Although the law often lags behind technological innovation, Canadian legal frameworks are beginning to grapple with the novel challenges introduced by AI. Questions of liability once reserved for human actors now extend to AI, compelling lawyers, policymakers and industry leaders to revisit long-standing legal doctrines through a modern lens.
As AI technologies continue to permeate business settings, stakeholders must prioritize transparency, accountability and ethical considerations. It is critical that organizations using AI technology ensure that any vendor they contract with adequately complies with all applicable laws and regulations, because liability for non-compliance may ultimately fall on the deployer of the technology.
In addition to the contracting best practices discussed above, organizations that are using (or planning to use) AI tools and models should consider taking the following steps:
- Build a principle- and risk-based AI compliance framework that can evolve with the technology and regulatory landscape. The framework should be built with input from both internal and external stakeholders.
- Part of the framework should set out clear guidelines around the responsible, transparent and ethical use of AI technology.
- Conduct a privacy and ethics impact assessment for the use of new AI technology. The assessment should answer the following questions:
- What type of personal information will the AI technology collect, use and disclose?
- How will the personal information be used by the AI technology?
- Will the data set lead to any biases in the output?
- What risks are associated with the AI technology's collection, use and disclosure of the personal information?
- Will there be any human involvement in the decision-making?
In an era where AI increasingly influences business decisions, the legal and ethical responsibilities surrounding its use cannot be an afterthought. While Canada's regulatory framework for AI remains fragmented, organizations that act now to build robust governance structures and contractual safeguards will be better positioned to navigate the evolving landscape. The path forward lies not only in compliance, but also in cultivating a culture of accountability and transparency that keeps pace with technological change.
Footnote
1. Information and Privacy Commissioner of Ontario. Privacy Complaint PI21-0001, 2024.
Originally published by Lexpert.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.