Implementing CAN/ASC-6.2:2025 Accessibility Requirements for AI Systems

[Image: Illustration of diverse users, including a wheelchair user and a guide dog, interacting with an AI-powered system represented by a digital brain, highlighting inclusive AI accessibility.]

Artificial intelligence is rapidly reshaping how organizations deliver services, make decisions, and interact with users. From automated hiring tools to AI-powered customer support, these systems increasingly influence daily life. As adoption accelerates, a critical challenge has emerged: ensuring AI systems are accessible, equitable, and respectful of the rights of people with disabilities.

Canada’s new accessibility standard, CAN/ASC-6.2:2025, establishes a comprehensive framework for addressing this challenge. For organizations that rely on digital accessibility services, the standard provides clear technical and organizational requirements for building, procuring, and managing accessible and equitable AI systems.

What Is CAN/ASC-6.2:2025?

Published in December 2025 by Accessibility Standards Canada, CAN/ASC-6.2:2025 is Canada’s first accessibility standard dedicated specifically to artificial intelligence systems. It aligns with major legal and human rights frameworks, including the Accessible Canada Act, the Canadian Human Rights Act, and the United Nations Convention on the Rights of Persons with Disabilities.

The standard applies broadly to organizations that develop, deploy, procure, or manage AI systems. Whether an organization builds custom AI models, integrates third-party AI tools, or delivers AI-powered services, CAN/ASC-6.2:2025 establishes mandatory accessibility and equity requirements that must be addressed across the AI lifecycle.

Purpose and Human-Rights-Based Goals

At its core, CAN/ASC-6.2:2025 takes a human-rights-based approach to AI governance. It defines four overarching goals:

  • Ensuring people with disabilities receive equitable benefits from AI systems
  • Preventing discrimination, exclusion, privacy violations, and loss of autonomy
  • Protecting the rights, dignity, and freedoms of people with disabilities
  • Providing meaningful choice, including the option to decline AI in favor of human alternatives

These goals move accessibility beyond technical compliance and position it as a fundamental requirement for ethical AI.

Alignment with Existing Standards

CAN/ASC-6.2:2025 does not exist in isolation. It aligns with established accessibility and AI management standards, including CAN-ASC-EN 301 549 for ICT accessibility, CSA ISO/IEC 42001 for AI management systems, and ISO/IEC 30071-1 for inclusive design practices.

For organizations already working with digital accessibility services, this alignment allows AI accessibility requirements to be integrated into existing accessibility programs, governance structures, and compliance workflows.

Scope and Structure of the Standard

The standard is organized into foundational and substantive clauses. Clauses 1 through 9 establish governance, legal context, and guidance for interpretation. Clauses 10 through 13 define enforceable requirements across four key areas:

  • Accessible AI
  • Equitable AI
  • Organizational processes to support accessibility and equity
  • Education and training

This structure ensures that accessibility is embedded throughout AI systems rather than addressed as a post-deployment fix.

Accessible AI: Designing for Inclusion Across the Lifecycle

Accessible AI requires that people with disabilities can meaningfully participate in every phase of the AI lifecycle. This includes design, development, testing, procurement, deployment, and ongoing monitoring.

Technically, this means ensuring AI interfaces, outputs, and management tools are compatible with assistive technologies such as screen readers, speech input, and alternative navigation methods. It also requires accessible documentation, plain-language explanations of AI decision-making, and accessible feedback mechanisms.
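One concrete slice of assistive-technology compatibility is verifying that AI-generated content carries the markup screen readers depend on. As a minimal sketch (not part of the standard itself), the following Python uses only the standard library's `html.parser` to flag images in generated HTML that lack alternative text; the function and class names are illustrative:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag lacking a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            alt = attr_map.get("alt")
            if not alt or not alt.strip():
                self.missing.append(attr_map.get("src", "<unknown>"))

def find_images_missing_alt(html: str) -> list:
    """Return the src values of images with missing or empty alt text."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing

# Example: one compliant image and one non-compliant image
snippet = '<img src="chart.png" alt="Quarterly sales chart"><img src="logo.png">'
print(find_images_missing_alt(snippet))  # ['logo.png']
```

A check like this would be one small gate in a larger pipeline that also covers keyboard navigation, focus order, and ARIA semantics, typically validated with assistive-technology-based user testing rather than automation alone.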

A critical requirement is choice. When AI systems are used in high-impact contexts such as healthcare, legal services, or education, people with disabilities must be able to request human alternatives. For example, an AI-powered sign language interpretation system must not eliminate access to human interpreters when accuracy or safety is critical.
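The choice requirement can be expressed as simple routing logic. The sketch below is a hypothetical illustration, not the standard's wording: the context list, class, and function names are assumptions chosen for the example.

```python
from dataclasses import dataclass

# Assumed example list of high-impact contexts; the standard itself
# does not enumerate contexts this way.
HIGH_IMPACT_CONTEXTS = {"healthcare", "legal", "education"}

@dataclass
class ServiceRequest:
    context: str
    user_prefers_human: bool = False

def route(request: ServiceRequest) -> str:
    """Honor an explicit request for a human, and always surface the
    human alternative in high-impact contexts."""
    if request.user_prefers_human:
        return "human"
    if request.context in HIGH_IMPACT_CONTEXTS:
        return "ai_with_human_option"  # AI assists, but a human remains available
    return "ai"

print(route(ServiceRequest("healthcare", user_prefers_human=True)))  # human
```

The key design point is that the human path is never removed: declining AI must be a first-class branch, not an error case.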

Equitable AI: Preventing Harm and Bias

Equitable AI focuses on fairness and harm prevention. CAN/ASC-6.2:2025 requires organizations to actively identify and mitigate risks that disproportionately affect people with disabilities. This includes bias in training data, unequal accuracy across disability groups, and misuse of AI for surveillance or behavioral judgment.

Organizations must evaluate cumulative harms over time, protect disability-related data, and ensure informed consent before deploying AI systems. Transparency is essential—users must understand how AI systems work, what data they use, and how decisions can be challenged.
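Detecting unequal accuracy across disability groups starts with disaggregated metrics. As a minimal sketch under assumed data (the group labels and record format here are illustrative, not prescribed by the standard), the following computes per-group accuracy and the worst-case gap between groups:

```python
def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns a dict mapping each group to its accuracy."""
    totals, correct = {}, {}
    for group, pred, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if pred == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def max_disparity(acc):
    """Largest accuracy gap between any two groups."""
    return max(acc.values()) - min(acc.values())

# Illustrative records: (group, model prediction, ground truth)
records = [
    ("screen_reader_users", 1, 1), ("screen_reader_users", 0, 1),
    ("other_users", 1, 1), ("other_users", 1, 1),
]
acc = accuracy_by_group(records)
print(acc)                 # {'screen_reader_users': 0.5, 'other_users': 1.0}
print(max_disparity(acc))  # 0.5
```

A large disparity like this would trigger the mitigation and documentation steps the standard requires, alongside qualitative review with affected users.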

From a digital accessibility services perspective, this requires combining accessibility testing with bias analysis, data governance reviews, and assistive-technology-based user testing.

Organizational Processes and Governance

The standard goes beyond technical controls by mandating organizational processes that support accessible and equitable AI. Organizations must establish AI governance structures that include people with disabilities in decision-making roles. Risk assessments must consider both major and minor harms, with disability-specific impacts explicitly evaluated.

Procurement processes must require accessibility and equity criteria, including testing AI tools before implementation and enforcing contractual accountability. Continuous monitoring is mandatory, with public records of harms and clear escalation procedures.

Staff training is also essential. Everyone involved in AI development or management must receive accessible, role-specific training on equitable AI practices, with people with disabilities involved in designing and delivering that training.

Accessible Education and AI Literacy

CAN/ASC-6.2:2025 recognizes that inclusive AI depends on accessible education. Organizations must provide accessible AI literacy resources that help people understand how AI affects their rights, choices, and autonomy.

This includes training technical professionals on accessibility requirements, educating users about AI decision-making, and empowering people with disabilities to participate in AI governance and feedback processes.

Why CAN/ASC-6.2:2025 Matters

For organizations across sectors, CAN/ASC-6.2:2025 offers a clear roadmap for compliance, risk reduction, and ethical AI adoption. It helps organizations reduce legal exposure, improve AI quality, build trust, and future-proof AI investments.

Most importantly, it ensures that artificial intelligence systems serve everyone without exclusion.

Getting Started with Compliance

Implementing CAN/ASC-6.2:2025 requires structured planning. Organizations should begin by auditing existing AI systems, establishing inclusive governance, defining procurement standards, embedding accessibility into AI design, training staff, and continuously monitoring real-world impacts.
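The planning steps above can be tracked programmatically. This is a hypothetical sketch of a compliance tracker; the step names paraphrase this article's list, not the standard's clause text, and the class and function names are assumptions.

```python
from dataclasses import dataclass, field

# Step names paraphrased from the planning guidance above (illustrative only).
STEPS = [
    "Audit existing AI systems",
    "Establish inclusive governance",
    "Define procurement standards",
    "Embed accessibility into AI design",
    "Train staff",
    "Monitor real-world impacts",
]

@dataclass
class CompliancePlan:
    done: set = field(default_factory=set)

    def complete(self, step: str):
        if step not in STEPS:
            raise ValueError(f"Unknown step: {step}")
        self.done.add(step)

    def remaining(self) -> list:
        """Steps not yet completed, in plan order."""
        return [s for s in STEPS if s not in self.done]

plan = CompliancePlan()
plan.complete("Audit existing AI systems")
print(len(plan.remaining()))  # 5
```

In practice each step would carry evidence artifacts (audit reports, training records, monitoring logs) rather than a bare completion flag, since the standard expects documented, ongoing processes.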

Partnering with experienced digital accessibility services providers can help organizations translate these requirements into practical, defensible, and scalable AI accessibility strategies.