The rapid adoption of artificial intelligence is reshaping how businesses innovate and operate, but it also introduces new and complex legal, ethical, and compliance considerations.
In this FAQ, Fasken's Technology group addresses common questions that arise in the course of developing, procuring, using, and ensuring the legal compliance of AI systems. Our goal is to help both the developers and providers of AI technologies, as well as the organizations that procure and use them, understand key issues concerning the adoption of AI and future-proof their AI strategies.
For more information or to discuss a particular matter, please contact us.
FREQUENTLY ASKED QUESTIONS:
Considerations for Developers and Providers of AI Systems:
- Do I need consent from my customer to use their data to develop and train AI systems?
Yes, you generally need explicit consent from your customers to use their data to develop and train AI systems. Typically, this consent is obtained through the grant of a right or license to use the data for this purpose in your contract with the customer. If that data includes personal information, consent must be informed and specific. Contractual clarity is essential to avoid disputes and regulatory exposure. Absent an express right to use customer data to develop and train an AI system, using customer data for this purpose could result in a breach of privacy laws or confidentiality obligations, or impact your ability to assert clear ownership of the AI system.
- What needs to be done to turn personal data into anonymized data?
To anonymize data, you must remove or alter identifiers existing in that data in a way that prevents the data from being usable to identify a person, even when cross-referenced with other datasets. The obligation to anonymize data often extends beyond personal data and privacy requirements; customers frequently expect that data that could be used to identify their organization (or even any organization) should also be anonymized. Anonymization can be accomplished through techniques such as data masking, aggregation with other datasets, and differential privacy. Importantly, anonymization must be irreversible and meet the legal standard under applicable privacy laws. Simply removing names or emails is not sufficient if the remaining data can still be used to identify an individual.
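As an illustration only, the masking and generalization techniques mentioned above can be sketched in a few lines of Python. The field names and thresholds here are hypothetical examples, and note that tokenizing identifiers with a salted hash is generally considered pseudonymization rather than true anonymization; whether a dataset meets the legal standard is a fact-specific assessment of re-identification risk, not something a code snippet can guarantee.

```python
import hashlib

def mask_identifier(value: str, salt: str = "rotate-me") -> str:
    """Replace a direct identifier with an irreversible-looking token.
    (A salted hash is pseudonymization, not anonymization, because the
    mapping could in principle be re-derived by whoever holds the salt.)"""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def generalize_record(record: dict) -> dict:
    """Drop direct identifiers and coarsen quasi-identifiers so a single
    record is harder to link back to an individual."""
    decade = (record["age"] // 10) * 10
    return {
        "id": mask_identifier(record["email"]),
        # Age band instead of exact age
        "age_band": f"{decade}-{decade + 9}",
        # Keep only the first three characters of a Canadian postal code
        "region": record["postal_code"][:3],
    }

record = {"email": "jane@example.com", "age": 42, "postal_code": "M5V 2T6"}
print(generalize_record(record))
```

Even after such transformations, the residual re-identification risk of the dataset as a whole (e.g., rare combinations of age band and region) must still be evaluated against applicable privacy laws.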
- Can I use anonymized data generated using my customer's data for any purpose?
Once data is truly anonymized, it is generally no longer subject to privacy laws and may be used for broader purposes, including model training, analytics, and commercialization. However, you should ensure that your contracts with customers permit the transformation and use of their data in this way. Some contracts restrict the use of derivative data generated from their data, assign to the customer ownership of such derivative data, or impose confidentiality obligations that may still apply even after anonymization.
- What should a developer of AI systems do to future-proof their work against emerging AI regulations?
Although the Artificial Intelligence and Data Act (AIDA) was shelved in early 2025, its principles of risk-based governance, transparency, and accountability continue to influence voluntary frameworks and provincial initiatives in Canada. Developers should implement internal governance structures, conduct risk assessments for high-impact use cases, and maintain documentation of training data and model behavior, including to ensure the model is free from discriminatory bias. Adopting voluntary codes (such as ISED's Generative AI Code of Conduct) or achieving compliance with international AI standards (such as ISO/IEC 42001, ISO/IEC 23894 or the NIST AI Risk Management Framework) can help demonstrate proactive compliance, build trust with customers, and increase the likelihood that internal practices (and the AI systems themselves) will not need to be significantly adjusted in the event Canada re-introduces a bill to regulate AI.
- Can I use publicly available data on the web or other online channels to train my AI solution?
Using publicly available data is not automatically lawful for AI training. Copyright and privacy laws, as well as contractual obligations, may restrict such use. For example, scraping data from a website may breach the terms of use or terms of service applicable to your use of that website. Developers should assess the legal status of the data source and consider licensing arrangements or alternatively using open source datasets that are made available for unrestricted public use, including for AI training purposes.
- What measures should I take to protect sensitive customer data when using AI?
You should implement robust data security practices, including data encryption, access controls, audit trails, and secure environments for model training. If you use third-party AI systems, you should ensure that their providers are contractually bound to handle the data being processed appropriately and do not use the data for purposes beyond the data-use rights your customer granted to you. Where you are processing particularly sensitive data, it is typical for customers to require that their data be logically and/or physically separated from other customer datasets.
Considerations for Organizations Procuring and Using AI Systems:
- To what extent is our organization responsible for the third-party AI tools that we procure and use?
Your use of AI tools may trigger responsibility for compliance with existing laws and regulations, including those relating to privacy, employment and human rights, intellectual property and negligence. Liability may arise, for example, if the AI tool causes harm, infringes rights, or processes personal or other data unlawfully. Your organization may also be subject to government- or industry-specific codes, policies or other guidance in respect of such use. Due diligence, contractual safeguards, and ongoing monitoring are essential to mitigate risk. Among other things, you should ensure that your upstream and downstream agreements include clear obligations, warranties, audit rights and indemnities to manage exposure.
- If I input business information into a third-party AI platform for analysis or content generation, will this data be stored and reused by the third-party provider?
AI platforms may reserve the right to retain and use inputted data to improve their models. This could result in a loss of confidentiality, trade secret or intellectual property protections, potentially resulting in a loss of competitive advantage. Before inputting sensitive business information, you should review the provider's terms of service and privacy policy, and consider negotiating data usage limitations or using platforms that provide more favourable customer data protection terms.
- I would like to use AI to optimize my HR department. Are there special concerns I should be aware of?
Yes, AI use in HR raises heightened legal and ethical concerns, particularly around privacy, bias and transparency. Tools used for recruitment, performance evaluation or workforce analytics must comply with human rights legislation and privacy laws, including consent and fairness requirements. You should validate that the AI systems are explainable, auditable and free from discriminatory bias, and ensure that employees are informed, to the extent required under applicable law, about how their data is being used.
- My website uses an AI chatbot for customer support. Am I liable for the answers provided by the chatbot?
Your organization may be held liable for misleading, inaccurate or harmful information provided by an AI-powered chatbot, especially if it affects the customer's decision-making or otherwise causes the customer harm. Chatbots should be carefully trained, monitored, and clearly identified as automated tools. You should implement disclaimers, escalation protocols and regular audits to reduce legal exposure.
- How can I use third-party AI tools safely without compromising data security?
Safe AI use requires a combination of technical safeguards, contractual controls, and governance practices. Choose vendors with robust security certifications and practices, including data encryption and access controls. Limit the input of sensitive or identifiable data unless necessary, and ensure that the vendor's data handling practices align with your internal policies and legal obligations.
Intellectual Property Considerations:
- If I use a generative AI platform to create content for my business, do I own the rights to this content?
The contractual terms of most generative AI services state that each user owns what they generate. Be careful, however, as some services require that you pay higher fees for uses above certain thresholds or for any business use. More importantly, the fact that a contract indicates that you own intellectual property rights does not mean that you actually do; ownership is governed by intellectual property law, not contracts. If your personal contribution to the content is modest, for example, limited to a few ideas formulated in a prompt, a Canadian court may conclude that the AI-generated content attracts no copyright protection. This is because mere ideas are unprotected and you did not actually contribute to the expression of the work; in this example, the AI model made all the expressive contributions, and it may very well be that human contribution to the expression of the work is mandatory. This is less of a concern for trademark rights, as ownership of trademarks is tied primarily to first use or registration, regardless of how they are created. Keep a proper record of your own and your coworkers' contributions to the process, such as the sequences of prompts used to generate content and the steps taken to alter the content after the generative AI tool creates it. You will want to ensure that there is significant human input in any content that is key to the value of your business.
- Should I be concerned about how my suppliers use AI to develop content for use in my business?
Yes, you should. As indicated above, there is a risk that no intellectual property rights arise in content generated by generative AI platforms. Your contracts should require suppliers to disclose whether they will be using generative AI and give you an opportunity to refuse such use. This will let you assess whether that use risks depriving you of intellectual property rights in key assets due to a lack of human contribution, which may be an important consideration for some content but not all. Moreover, a generative AI platform could create content that mimics other people's creations, which could lead to infringement claims, and the fact that you were unaware of the infringement may not be enough to absolve you of liability. Knowing that this type of tool will be used will help you be more vigilant and understand what value you are getting in exchange for the amounts you are paying.
- I have developed an innovative AI-based system. Can it be patented?
Many applicants have been securing protection for innovations that use artificial intelligence in one form or another. There have historically been some challenges in Canada in securing computer-based inventions, especially in certain sectors, such as IT-based methods of doing business, and the same would be true for some AI-based inventions. Generally speaking, many parties have been securing patent protection for AI-based systems for a number of years to the extent they are new, useful and non-obvious. Successful patent applications directed to AI-based systems typically provide a detailed description of the technical problem faced and a technical solution to that problem. If an AI-based system is innovative as compared to other offerings available in the market, it is worth assessing whether it could be patented before releasing details publicly. If a patent application is filed after public release, it is still possible to obtain a patent in Canada provided that no more than 12 months have elapsed, but in many foreign countries protection will be lost. In light of this, having a discussion with a patent advisor before revealing details of your innovation publicly would be a wise move.
- If I use an AI tool to help develop new inventions, such as identifying compounds to solve an industrial problem, will I be able to patent these inventions?
It is possible that patent protection may be available. Note that judicial decisions in Canada clarify that an inventor listed on a patent application must be a "person". It may be difficult to assert that an "AI tool" is a person from a legal perspective (a category that includes humans and corporations, for example) and thus entitled to be an "inventor". This makes it challenging to claim that contributions by a generative AI tool to the inventive concept, here the selection of a compound to solve the problem, are protectable. However, where the conception of an idea is done by a human and an AI tool is used to confirm it or provide experimental data, there may still be a strong argument that the human fulfills the role of "inventor"; this is the position taken by the Commissioner of Patents in mid-2025. Accordingly, if the combined efforts of a human and an AI tool give rise to a patentable invention, there is a possibility of securing protection. Using an AI tool does not dispense with the traditional requirements for securing patent protection, such as that the development must be novel and non-obvious in light of the prior art. If the AI tool is merely restating prior publicly available documentation describing possible uses for chemical compounds, its contribution may carry little weight.
- What steps can I take to prevent an AI system from generating content that infringes on my patents/copyrights/trademarks?
Patents are publicly available documents that are accessible online from patent offices and other sources, so it would be difficult to prevent the training of an AI system on these documents to generate output. The same applies to trademarks, which are also listed in publicly available databases and are normally widely used online. If you believe a third party infringes your rights, you will need to pursue legal remedies, starting with preparing and sending a demand letter. For copyrighted content that is available online, it may be helpful to include user terms and conditions that prohibit the use of the content for AI training purposes. This could help prevent others from using AI to generate content derived from your own creations. Notices to a similar effect on physical copies may also have an impact, although it may be more difficult to make the case that they are binding contractual terms.
If you have control or input over an AI system's training and programming, selectively choosing the data used to train the AI could be a way to mitigate the risk of intellectual property and contractual claims. If not, prevention becomes more difficult.
Regulatory Considerations:
- Are there any AI-specific laws in Canada?
Canada does not have a comprehensive artificial intelligence ("AI") law in force at the federal level. The federal government proposed the Artificial Intelligence and Data Act as part of Bill C-27 (the Digital Charter Implementation Act, 2022), which died on the order paper. No province has yet proposed its own AI law for the private sector. Consequently, AI in Canada's private sector is regulated by general-purpose laws—privacy, consumer protection, human rights—and by sector-specific regulations.
Regarding the public sector, Ontario has passed legislation regarding the use of AI in the public sector, and the Canadian federal government has established a Directive on Automated Decision-Making.
- How do Canadian privacy laws apply to the use of AI in Canada?
Canadian private-sector privacy laws, including the federal Personal Information Protection and Electronic Documents Act (PIPEDA), apply to organizations and their use of AI systems that collect and use personal information in the context of commercial activities (PIPEDA) or within the provinces that have established private-sector privacy laws (Alberta, BC, and Québec). In general, these laws require obtaining valid and meaningful consent from individuals, ensuring accountability, maintaining transparency, minimizing data collection, ensuring accuracy, and implementing appropriate security measures.
Québec's privacy law requires organizations to inform individuals when a decision about them is made exclusively through an automated process and to provide information about the technology and the factors that led to the decision, as well as an opportunity to have such decisions reviewed by a human.
- How does Canadian law address issues of bias in connection with AI?
Issues of AI bias and discrimination are addressed primarily through existing human rights laws such as the federal Canadian Human Rights Act and similar provincial legislation. If an AI system's outputs or decisions result in discrimination on a prohibited ground (such as race, sex, age, or disability), it can violate human rights laws. These laws apply to AI-enabled decision-making just as they do to human decision-making, prohibiting practices that unlawfully discriminate on prohibited grounds. Further, the Charter of Rights and Freedoms (the "Charter") may apply if a public body's use of AI or legislation results in a violation of Charter rights. Canadian privacy regulators have stated that using personal information in a way that creates the risk of discrimination on prohibited grounds is forbidden, regardless of individual consent.
- Do the laws governing AI use apply equally to the private and public sectors?
Generally, no. In Canada, typically the laws that apply to the public sector are distinct from those that apply to the private sector. Private sector AI use is governed mainly by laws of general applicability such as privacy, intellectual property, and consumer protection laws, whereas public sector AI use is subject to public law duties, including those required under the Charter and administrative law principles.
- Are there any industries that have their own specific laws on the use of AI systems?
As of October 2025, there are no AI-specific laws in Canada for the private sector. However, financial services and capital markets regulators have issued several guidelines regarding the use of AI:
- The Office of the Superintendent of Financial Institutions (OSFI) issued Guideline E-23 – Model Risk Management in September 2025, which sets out OSFI's expectations for effective management of risks arising from the use of AI or machine learning models, effective May 1, 2027.
- Québec's Autorité des marchés financiers issued the Guideline for the Use of Artificial Intelligence in June 2025, recommending that organizations adopt a risk-based approach to managing the use of AI systems in Québec's financial services sector.
- Canadian Securities Administrators issued Staff Notice and Consultation 11-348: Applicability of Canadian Securities Laws and the Use of Artificial Intelligence Systems in Capital Markets in December 2024, which addresses key considerations for registrants, issuers, marketplaces, and other market participants that may leverage AI systems and highlights the importance of maintaining transparency, accountability, and risk management.
- The British Columbia Financial Services Authority issued the Artificial Intelligence Guideline for real estate professionals using AI in February 2024.
- Are there any specific rules on the use of AI systems in the health sector?
For the use of AI systems in the health sector, organizations should pay particular attention to health privacy laws such as Ontario's Personal Health Information Protection Act. Personal health information is sensitive personal information, and organizations developing AI systems should ensure they meet the requirements set by privacy laws and regulations. In addition, guidelines such as the Pan-Canadian AI for Health Guiding Principles and Health Canada's Pre-market Guidance for Machine Learning-Enabled Medical Devices may apply.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.