Accessibility Standards Canada has published the first edition of the National Standard of Canada CAN-ASC-6.2:2025, Accessible and Equitable Artificial Intelligence Systems. The standard sets a process-and-outcomes framework to ensure AI systems are accessible and equitable for people with disabilities across the AI lifecycle, and it is notable for the breadth and specificity of organizational obligations it places on entities that develop, procure, deploy, and oversee AI systems. Although voluntary (i.e. not legally enforceable by any authority in Canada), the standard can be used for conformity assessment and may quickly become a market baseline, particularly for federally regulated entities and organizations selling into Canadian public and private sectors. Companies both inside and outside Canada should therefore be aware of this standard.
What the Standard Is and Who It Affects
The CAN-ASC-6.2:2025 standard (the "Standard") is a National Standard of Canada developed by Accessibility Standards Canada ("ASC") through a consensus process. It is designed to drive "highest level" accessibility and equity outcomes, aligning with the Accessible Canada Act and relevant privacy laws, and referencing international frameworks and standards. The Standard is published as a voluntary instrument that can be recommended to the responsible Minister, applied by federally regulated entities, and used for certification and procurement criteria.
While the Standard is voluntary and ASC has no enforcement power, organizations are advised to confirm whether any applicable law, regulation, inspection plan, or certification program makes its application mandatory - for example, a requirement to comply with certain industry standards could make implementation of the Standard a prerequisite. The ASC notice underscores that users must verify their legal obligations and the Standard's suitability for their purposes, and must consult applicable federal, provincial, and municipal laws.
The standard applies to organizations creating, procuring, deploying, customizing, governing, or monitoring AI systems, and it defines "AI systems" broadly to include technologies that use data to perform tasks, recognize patterns, make decisions or predictions, or create content. It adds detailed definitions for accountability, transparency, equity, cumulative harm, statistical discrimination, and related concepts, which inform how obligations are assessed in practice.
Core Obligations for AI Developers and Deployers
The standard establishes four pillars: accessible AI, equitable AI, organizational processes supporting accessibility and equity, and AI education/literacy. It emphasizes the full participation of people with disabilities across all AI lifecycle roles, and it requires accessible processes, tools, outputs, documentation, and feedback mechanisms. Accessibility obligations incorporate explicit conformance to CAN-ASC-EN 301 549:2024 for tools, outputs, and documentation, and to CAN-ASC-3.1:2025 for plain language transparency materials.
Equity obligations require preventing underrepresentation and misrepresentation in training data; validating and tuning for equitable performance and reporting disaggregated metrics; continuously monitoring real-world impacts; and assessing and mitigating harms, including cumulative and context-specific harms that may disproportionately affect people with disabilities as statistical minorities or outliers. Organizations must avoid discriminatory use of surveillance, biometric categorization, emotion analysis, and predictive policing directed at people with disabilities, and must ensure informed consent and access to equivalent alternatives where AI is used in decision-making.
Organizational process obligations are extensive. They include governance with participation by people with disabilities, planning that embeds human oversight and alternatives to AI, equitable risk assessments not anchored solely on majority outcomes, public notice of intent to use AI, appropriateness assessments of datasets, design and development practices that engage and compensate people with disabilities, procurement verification by neutral accessibility and disability equity experts, customization testing, ongoing impact assessments with a public registry of harms and contested decisions, staff training, accessible transparency and consent mechanisms, provision of equivalent non‑AI or human‑oversight alternatives, and data management consistent with Canadian privacy laws.
The standard also requires accessible AI literacy and training for personnel, developed and delivered in collaboration with people with disabilities, and tailored to include privacy, UI accessibility, harm and risk detection, bias mitigation, and inclusion practices across the lifecycle.
Key Requirements Likely to Become Procurement and Market Baselines
Though the Standard is not currently mandatory, the elements it sets forth are likely to become baseline industry practice, which will shape the "reasonableness" of corporate AI policies and their implementation. A number of these elements mirror similar recommendations in other countries.
Public transparency and notice. Organizations must publish accessible, plain‑language information about AI functions, data used, decision logic, risks, alternatives, and accountable contacts before deployment, and must keep that information current. They must also publicly disclose the intention to use AI in accessibility plans and provide accessible channels for input and feedback.
Human alternatives and contestability. Organizations must offer equally effective, timely, and convenient non‑AI options or AI with direct human oversight, and provide accessible information on how to correct, contest, change, or reverse AI‑assisted decisions.
Public registry of harms and contested decisions. A public, accessible registry documenting harms, barriers, and inequitable treatment related to AI systems is required, with privacy safeguards and eventual submission to any centralized system once established.
Independent verification. Before acquisition, conformance to accessibility and equity criteria must be verified by neutral experts. Contracts should include termination provisions if accessibility or equity performance degrades.
Data appropriateness and privacy compliance. Dataset appropriateness must be assessed per use, with involvement of people with disabilities, and data storage/management must comply with the Privacy Act, PIPEDA, and other applicable privacy laws, including robust de‑identification to prevent re‑identification.
Restrictions on surveillance analytics. The standard requires refraining from discriminatory surveillance, biometric categorization, emotion analysis, and predictive policing targeting people with disabilities, and mandates planning/design controls to prevent misuse or manipulation.
Implications for AI Developers Inside and Outside Canada
For developers operating in Canada. Even where the standard is not legally mandated, federal entities and many private buyers may adopt CAN‑ASC‑6.2:2025 requirements as procurement conditions or risk management criteria. Developers should expect requests for: accessible documentation and interfaces aligned with EN 301 549; disaggregated performance metrics and validation evidence; demonstrable inclusive design and compensated engagement of people with disabilities; public transparency artifacts; human‑alternative pathways; and integration with internal governance, registry, and incident response processes.
For developers outside Canada selling into Canadian markets. Cross‑border providers offering AI systems to Canadian customers—including cloud‑based providers—should anticipate contractual flow‑down of these requirements, third‑party conformance verification, and heightened scrutiny of data practices under Canadian privacy laws. Notably, the standard references alignment with international standards (e.g., ISO/IEC 42001) and requires interface accessibility against EN 301 549, which can support interoperability for global vendors but will still require Canada‑specific process controls (e.g., public notice, registry of harms, and human‑alternative mechanisms).
For foundation model providers and toolchain vendors. The standard reaches upstream. It requires accessibility of tools used to design, develop, deploy, and oversee AI, and it extends to outputs produced by AI systems, including tools created using AI. Vendors supplying model development platforms, MLOps tools, and assistive AI components should be prepared to demonstrate accessibility conformance and to support customer obligations across governance, monitoring, and transparency.
For organizations using AI in high‑risk contexts or with disability impacts. The standard elevates protection thresholds in scenarios with potential cumulative harms, outlier effects, or misrepresentation in data. It calls for prioritizing risk prevention for people with disabilities regardless of quantitative certainty and mandates conditions to halt or terminate systems if accessibility or equity degrade—a strong signal for conservative operational risk management and structured escalation paths.
Practical Steps to Align
Developers should consider integrating the following into product and compliance roadmaps to meet buyer expectations in Canada:
- Embed inclusive design with compensated participation by people with disabilities across design, testing, and post‑deployment monitoring; validate equitable performance with disaggregated metrics and document tuning for equity.
- Produce accessible, plain‑language transparency artifacts pre‑deployment; maintain a changelog that keeps this material current with model and data updates.
- Build product features that enable human‑alternative workflows, contestability, and accessible feedback channels; provide APIs or modules to support buyer registries and incident response.
- Design for EN 301 549 accessibility across UIs, documents, support, and developer tools; ensure outputs generated by AI are also accessible.
- Establish dataset appropriateness reviews for disability contexts, with controls against biased proxies, mislabeling, synthetic data gaps, and context drift; document rationales per task and population.
- Prepare for independent expert assessments in procurement and for contract clauses enabling suspension/termination tied to accessibility and equity performance.
- Update training programs for engineering, product, legal, and operations with accessibility, privacy, bias mitigation, and harm detection modules co‑developed with people with disabilities.
Timeline, Review, and Relationship to Other Frameworks
The standard is set to be reviewed within four years (that is, by December 2029). It is framed as the first part of a multi‑part standard emphasizing adaptable, context‑sensitive requirements, with more precise technical guidance to follow. It aligns with the Accessible Canada Act and references Canada's Privacy Act and PIPEDA; it also signals alignment with international standards such as ISO/IEC 42001 and EN 301 549, and with international human rights instruments such as the UN Convention on the Rights of Persons with Disabilities.
Takeaway
For AI developers in and outside Canada, CAN‑ASC‑6.2:2025 establishes a rigorous, disability‑centered benchmark that is likely to shape Canadian procurement, governance, and market norms. Early adoption will reduce contracting friction, demonstrate responsible AI practices, and align products with accessibility and equity expectations that are increasingly converging across jurisdictions, while addressing specific Canadian process requirements around notice, alternatives, transparency, registries, and privacy.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.