13 November 2025

Accessibility And Artificial Intelligence In The Workplace: Accessible And Equitable Artificial Intelligence Systems Draft Standard ASC-6.2

Singleton Urquhart Reynolds Vogel LLP


In the workplace, artificial intelligence ("AI") can be a powerful tool that increases productivity while saving time and costs. However, AI also poses risks, such as data privacy concerns, intellectual property infringement, and disruption to the job market. In addition, the rise of AI brings accessibility and equity concerns, which Accessibility Standards Canada ("ASC") has attempted to mitigate in its March 2025 draft of the ASC-6.2 Accessible and Equitable Artificial Intelligence Systems standard (the "Draft AI Standard"). The Draft AI Standard underwent public review from March 6 to May 5, 2025.1

The Draft AI Standard aims to ensure that AI systems are accessible to persons with disabilities, from the initial design of a system to its eventual implementation.2 Specifically, the Draft AI Standard sets out how to:

  • make AI systems accessible to people with disabilities;
  • ensure that AI systems make decisions and treat persons with disabilities in an equitable manner;
  • ensure that organizations set up processes necessary to achieve accessible and equitable AI systems; and
  • educate people about how to achieve accessible and equitable AI systems.3

APPLICATION AND SCOPE

ASC's standards are generally limited in application, and they are currently only voluntary.4 However, the Draft AI Standard provides a helpful guide to navigating accessibility issues when implementing AI technology and foreshadows future accessibility requirements.

Presently, ASC's standards apply only to federally regulated bodies and organizations, which include parliamentary bodies, federal government departments and agencies, the Royal Canadian Mounted Police, and federally regulated private-sector industries.5 Private-sector industries that fall under federal jurisdiction include air transportation, banks, most federal Crown corporations, postal and courier services, and radio and television broadcasting.6

Although ASC's standards are only voluntary, ASC notes on its website that it will recommend its standards to the Minister for adoption as regulations.7 If the Minister adopts a standard as a regulation, it will become mandatory and enforceable. ASC also uses mandatory language throughout the Draft AI Standard to encourage compliance with its suggestions and guidelines.

FOUR THEMES IN THE DRAFT AI STANDARD

The following four themes emerge from the Draft AI Standard, each of which is discussed below:

  1. Full participation in the AI lifecycle and AI-related decision-making
  2. Assessment and monitoring of risks and impacts
  3. Transparency and procedural mechanisms
  4. Accessible and equitable use of the AI system

1. Full participation in the AI lifecycle and AI-related decision-making

The Draft AI Standard states that persons with disabilities must have the opportunity to participate fully in all roles and in all stages of the AI lifecycle.8 ASC notes that the AI lifecycle includes the creation of datasets, AI systems, and components (design, coding, implementation, evaluation, and refinement), as well as procurement, consumption, governance, management, and monitoring.

When making AI-related decisions, organizations should also seek to involve persons with disabilities on an ongoing basis and specifically seek their input during the design, development, procurement, and customization of their AI systems.9

2. Assessment and monitoring of risks and impacts

Impact assessments during the planning stage

According to the Draft AI Standard, when an organization is planning to implement an AI system, it should consider the system's impact on persons with disabilities and take appropriate measures to prevent harmful impacts.10 An organization's impact and risk assessment processes must:

  • include persons with disabilities who may be impacted by the AI system;
  • determine the AI system's impact on the broadest possible range of persons with disabilities;11 and
  • account for the aggregate impact of any cumulative harms on persons with disabilities.12

Ongoing impact assessments and monitoring of potential harms

In addition to assessing risk and impact during the planning stage, organizations must conduct ongoing impact assessments and data quality monitoring throughout the AI lifecycle to identify emerging or actual bias or discrimination toward persons with disabilities, such as a lack of equitable access to benefits or an undermining of individual agency.13 Organizations must also work with national disability organizations and organizations with expertise in accessibility and disability equity to establish thresholds for unacceptable levels of risk and harm.14

Halting or terminating an AI system

Organizations must recognize when their AI system has degraded to a point where accessibility and equity for persons with disabilities are so compromised that the system should no longer be used.15 Where this is the case, organizations must stop using the AI system until the accessibility barrier or inequitable treatment is remediated.16 Organizations should therefore ensure that their procurement contracts for AI systems enable them to halt or terminate the AI system when necessary.17

3. Transparency and procedural mechanisms

Under the Draft AI Standard, organizations must ensure that they have established processes that provide sufficient transparency and accountability throughout the AI lifecycle.

Transparency

Before deploying an AI system, an organization must notify national disability organizations and interested parties of its intention to use the AI system.18 This notice must be public and in an accessible format, so interested parties can request to receive future notices.19

During the AI lifecycle, organizations must also consistently disclose the data that they receive from impact assessments and ongoing monitoring of potential harms in an accessible and non-technical manner.20 Under the Draft AI Standard, organizations must keep a public registry of harms, contested decisions, reported barriers to access, reports of inequitable treatment of persons with disabilities related to AI systems, and feedback related to harms (with the consent of the individuals submitting the feedback).21

Collection of data

Where data is collected by consent, persons with disabilities must have the ability to withdraw their consent for the use of their data at any time and without negative consequences.22 Where data is collected without consent, organizations must involve persons with disabilities in decisions about how the datasets are used.23

Feedback, complaints, and appeals

The design of an AI system must include accountability and governance mechanisms that clearly indicate the party accountable for decisions made by the AI system.24 Combined with this, the Draft AI Standard requires that organizations implement a procedure for feedback, complaints, and appeals that:

  • acknowledges receipt of feedback, incidents, and complaints and provides a response within 24 hours;
  • provides a timeline for addressing feedback, incidents, and complaints;
  • provides persons with disabilities the opportunity to offer feedback or contest decisions anonymously;
  • communicates the status of efforts taken to address feedback, incidents, and complaints; and
  • provides an opportunity for persons with disabilities to appeal or contest any proposed remediation.25

Alternatives to AI

Organizations must also provide alternatives to AI systems for persons with disabilities, including the option to request that decisions be made by persons with knowledge and expertise in the needs of persons with disabilities.26

4. Accessible and equitable use of the AI system

Under the Draft AI Standard, organizations must first assess whether a dataset containing information about persons with disabilities should be used as an AI system input, based on whether the dataset aligns with the objective of the AI system.27 They must then ensure that AI systems are not biased against or discriminatory toward persons with disabilities and must seek to prevent such outcomes, including where synthetic data contains an insufficient sample of disability experiences relevant to the purpose of the AI system.28

Throughout the AI lifecycle, in addition to complying with the Draft AI Standard, organizations should also ensure that they meet the requirements of ASC's Accessibility Requirements for ICT Products and Services National Standard (the "ICT Standard"), which sets out accessibility requirements for Information and Communication Technology (ICT) products and services, including web-based, non-web-based, and hybrid technologies.29

TAKEAWAYS

Given that the Draft AI Standard may soon be adopted as a regulation, we recommend that employers consider implementing the requirements and practices it sets out, or at least turn their minds to the impact of AI systems on persons with disabilities. A good first step is to seek feedback from, and encourage the participation of, persons with disabilities in all AI initiatives.

If you have any questions about how to increase accessibility in your workplace, please contact the writers of this article. Our employment and labour lawyers would be happy to assist you.

Footnotes

1. https://accessible.canada.ca/creating-accessibility-standards/public-reviews

2. An AI system is defined in the Draft AI Standard as a "technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning, or another technique in order to generate content or make decisions, recommendations, or predictions."

3. Draft AI Standard, Preface.

4. https://accessible.canada.ca/about-us

5. https://accessible.canada.ca/about-us

6. https://www.canada.ca/en/services/jobs/workplace/federally-regulated-industries.html

7. https://accessible.canada.ca/about-us

8. Clause 5.1.1

9. Clauses 5.3.1, 5.3.5, 5.3.6, and 5.3.7

10. Clause 5.3.2

11. In Clause 5.2.2, ASC notes that risk assessments should prioritize persons who are minorities and who experience the greatest harmful impact.

12. Ibid.

13. Clause 5.3.8

14. Ibid.

15. Clause 5.3.13

16. Ibid.

17. Clause 5.3.6

18. Clause 5.3.3

19. Ibid.

20. Clause 5.3.10

21. Clauses 5.3.8 and 5.3.12

22. Ibid.

23. Ibid.

24. Clause 5.3.5

25. Clause 5.3.12

26. Clause 5.3.11

27. Clause 5.3.4

28. Clauses 5.2.2 and 5.3.4

29. Clauses 5.1.1 and 5.1.2

