
11 March 2025 | 16 minute read

Ontario Human Rights Commission publishes Human Rights AI Impact Assessment Tool

As of the time of writing, the Canadian Parliament’s latest session has been terminated as a result of prorogation, which is expected to last until March 24, 2025. This formal suspension of Parliament’s activities effectively terminated all bills in progress, including Bill C-27, introduced in 2022. Bill C-27 was set to replace and amend various federal statutes, as well as to establish Canada’s first legislative framework on artificial intelligence, the Artificial Intelligence and Data Act (the “AIDA” or the “Act”).

As noted in our recent article, this means that Canada’s federal privacy regime will remain as-is for the foreseeable future, without the modernizations and improvements that many were anticipating. Even if Bill C-27 were re-introduced and picked up where it left off, it is not obvious that any new laws would be passed before the next federal election, which must take place no later than October 2025.

As a result, the only AI-related guidance at the federal level remains the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (the “Voluntary Code”). At this time, there is no tribunal, commission, court or other adjudicative body with the formal role of enforcing or administering the Voluntary Code.

At the provincial and territorial level, legislatures have been slow to enact provisions regulating the use and development of AI-based applications. Ontario is the only province that has enacted legislation aimed at regulating AI use within its borders.

Ontario passed amendments to its employment standards legislation that will eventually require employers to be more transparent in their publicly advertised job postings, including by disclosing whether they use artificial intelligence to screen, assess or select applicants. Ontario Regulation 476/24: Rules and Exemptions Re Job Postings (the “Regulation”) was issued following those amendments. The Regulation introduces a broad definition of “artificial intelligence”: “a machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments”. The Regulation will not come into force until January 1, 2026.

The Ontario Human Rights Commission’s Human Rights Impact Assessment Tool

In late 2024, the Ontario Human Rights Commission (the “OHRC”) and the Law Commission of Ontario (collectively referred to as the “Commissions”) released a new Human Rights AI Impact Assessment (the “HRIA” or the “Tool”).

The HRIA is described as a “guide” and states that it “does not constitute legal advice and does not provide a definitive legal answer regarding any adverse human rights impacts, including violations of federal or provincial human rights law or other relevant legislation”. It also states that “an organization or individual will not be protected from liability for adverse human rights impacts, including unlawful discrimination, if they claim they complied with or relied on the HRIA.” In other words, organizations should be cognizant that compliance with the HRIA would not act as a complete shield. By virtue of Ontario’s Human Rights Code, the Human Rights Tribunal of Ontario, the adjudicative body responsible for enforcing human rights legislation with respect to provincially regulated employers in Ontario, is obligated to consider the OHRC’s policies in a proceeding if a party or intervenor requests that it do so. Such an intervenor may include the OHRC itself.

Purpose and use of the HRIA

The purposes of the HRIA are described as (i) strengthening knowledge and understanding of human rights impacts; (ii) providing practical guidance on human rights impacts, particularly in relation to non-discrimination and equality of treatment; and (iii) identifying practical mitigation strategies and remedies to address bias and discrimination from AI systems.

The HRIA was created for use by any organization, public or private, intending to design, implement, or rely on an AI system. While the HRIA was designed with a focus on the laws in Ontario, the Commissions indicated that it could be useful to any organization or individual across Canada. The Commissions noted that the HRIA was designed to apply broadly to “any algorithm, automated decision-making system or artificial intelligence system”, referred to collectively as an “AI system” in the HRIA. This definition of AI system is even broader than the definition of “artificial intelligence” under the Regulation discussed above, as it appears to extend to “algorithms” as well.

Structure of the HRIA

The HRIA is divided into two parts: Part A (Impact and Discrimination), and Part B (Mitigation). Each part contains numerous sections which represent categories of self-assessment questions.

In Part A, there are five categorical sections of questions: (1) Purpose of the AI system; (2) Is the AI system at high risk for human rights violation?; (3A) Does the AI system show differential treatment?; (3B) Is the differential treatment permissible?; (4) Does the AI system consider accommodation?; and (5) Results.

Under the Results section, the AI system is ranked into one of six categories, depending on the answers to specific questions in each category, to assist users of the Tool in assessing the potential adverse impact of the AI system under evaluation.

Once the AI system has been assessed from a risk perspective, Part B outlines strategies to mitigate possible human rights issues arising from use of the AI system. Mitigation strategies are categorized into four sections: (1) Internal Procedures for Assessing Human Rights; (2) Explainability, Disclosure, and Data Quality; (3) Consultations; and (4) Testing and Review.

Part A of the HRIA: Impact and discrimination

The questions to be considered in assessing the impact and discrimination risk of the AI system are as follows:

Section 1: Purpose of the AI System

  1. What is the general function of the AI system?
  2. What are the intended purposes of the AI system? What are the main and secondary objectives? If there is more than one objective, they should be ranked.
  3. Who is the AI system designed to benefit? Who could be harmed by the AI system?
  4. What are the alternatives for meeting these objectives? Why is an AI system needed or preferred?

Section 2: Is the AI system at high risk for human rights violation?

  5. Does the AI system make a decision or provide information or a score that may influence a decision?
  6. Does the AI system make or aid decisions in an area covered by human rights law?
  7. Does the AI system employ biometric tools (e.g., facial recognition technology, fingerprints, voice prints, gait analysis or iris scans)?
  8. Does the AI system track behavior? For example, does the AI system analyze keystroke patterns, purchasing habits, patterns of device use, or does it use affect recognition?
  9. Does the AI system have the ability to influence, elicit, or predict human behavior, expression, and emotion on a large scale?

The Commissions note that answering “yes” to question 5 and “yes” to one or more of questions 6-9 would, in their view, signify that the AI system may be at high risk for human rights issues. According to the Commissions, a high risk finding requires ongoing human rights review, consideration and mitigation. The Commissions indicated that, in such circumstances, the assessment should be completed.

If the answer to question 5 was “yes” but the answer to each of questions 6-9 was “no”, it is the Commissions’ view that the AI system is not at high risk for human rights issues, but that questions 10-13 should be completed.

If the answer to question 5 was “no” but the answer to any of questions 6-9 was “yes”, the Commissions’ view is that human rights issues could arise but that the AI system is not at high risk at the present time. That said, the rest of the assessment should be completed to monitor the AI system for any drift or change.

If the answer to all of questions 5-9 was “no”, the Commissions’ view is that the AI system is not at high risk for human rights issues, but questions 10-13 should still be completed. If you do not know the answer to one or more of these questions, the Tool encourages users to seek input from colleagues and experts who do. After this section is completed, you should continue the assessment.

The remaining questions in Section 2 of the Tool are set out below.

  10. Is the AI system operating in an area where there have been concerns raised about bias and discrimination in the past?
  11. Who is subjected/exposed to the AI system? Be specific.
  12. What are the demographics of the people who are subjected/exposed to the AI system? Be specific.
  13. Does the AI system have the potential to impact a historically disadvantaged group?

If the answer to question 10 or question 13 is “yes”, the Commissions’ view is that the AI system is at high risk for human rights issues and that sections 3 and 4 should be completed. If the answer to both questions 10 and 13 is “no”, then the Commissions’ view is that the AI system is not at high risk on the basis of who is affected by the AI system. However, the system may still be at high risk if it involves a high-risk use or context (see questions 5-9). The Commissions state that, if the AI system is not at high risk after answering questions 5-13, you do not need to complete the rest of the assessment, but that the assessment should be revisited within 90 days of a material change in the AI system and as part of annual maintenance. That said, we recommend that, before completing any questions relating to the risk of non-compliance with a human rights statute, legal advice be sought and the practical considerations set out at the end of this article be considered.
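
Purely as an illustration of how these branches fit together, the short Python sketch below summarizes the Section 2 triage logic as we have described it above. It is our own sketch rather than part of the OHRC’s Tool: the function and variable names are ours, and the output strings simply paraphrase the Commissions’ guidance.

```python
# Illustrative sketch only: a rough summary of the Section 2 triage logic as
# described in this article. The question numbers mirror the Tool; the function
# and variable names are ours and are not part of the HRIA.

def assess_use_and_context(q5: bool, q6_to_q9: list[bool]) -> str:
    """Questions 5-9: is the AI system a high-risk use or context?"""
    if q5 and any(q6_to_q9):
        # "Yes" to question 5 and to at least one of questions 6-9.
        return "may be high risk - complete the assessment"
    if not q5 and any(q6_to_q9):
        # Issues could arise, but the system is not high risk at present.
        return "not high risk at present - complete the rest and monitor for drift"
    # "Yes" to question 5 alone, or "no" to all of questions 5-9.
    return "not high risk - still complete questions 10-13"

def assess_who_is_affected(q10: bool, q13: bool) -> str:
    """Questions 10 and 13: is the system high risk based on who is affected?
    Questions 11 and 12 are descriptive and do not change the triage."""
    if q10 or q13:
        return "high risk - complete Sections 3 and 4"
    # The system may still be high risk based on its use or context (questions 5-9).
    return ("not high risk based on who is affected - revisit within 90 days of "
            "a material change and as part of annual maintenance")
```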

Section 3A: Does the AI system show differential treatment?

  14. What are the demographic characteristics of the people flagged by the AI system, or for whom it recommends or makes a decision? Be specific.
  15. Does the AI system produce results that differentiate based on one or more protected grounds?
  16. Have you tested or validated the AI system to see what factors it relies on? Does it rely on factors that correlate with a protected ground?
  17. Does the AI system assign characteristics to individuals based on proxies and other available data? Does the system produce outputs based on personal characteristics of individuals that are assumed, and not explicitly available in data? Does the technical system rely on a statistical model of human behavior or personal characteristic?

The Commissions’ view is that, if the answer to question 15, 16 or 17 is “yes”, then the AI system displays differential treatment on protected grounds. In this case, the questions under Section 3B could be used to determine whether the differential treatment is permissible or may amount to unlawful discrimination. We note, however, that it would be prudent for organizations not to complete those questions without first seeking legal advice.

The remaining questions in Section 3A of the Tool are set out below.

  18. Are there gaps or limitations in your ability to meaningfully answer questions 14-17? If you are not able to test the AI system for differential treatment on protected grounds, make a record of the gaps and limitations in the data and go to question 19.
  19. What is the cause of those limitations? Are they surmountable?

Section 3B: Is the differential treatment permissible?

  20. Is the purpose of the AI system, directly or indirectly, to advance a historically disadvantaged group?

If the answer to question 20 is “yes”, the Commissions state that the differential treatment is likely not discrimination. However, we note that not all initiatives aimed at advancing a historically disadvantaged group would be considered “special programs” for the purposes of the Ontario Human Rights Code, or programs of a similar nature under other human rights legislation. If an organization is unsure of the answer to question 20, it should seek legal advice on whether it has (or wishes to have) a special program in place in order to justify use of the AI system.

  21. If an individual or community is excluded from the results of the AI system, will it have a negative or adverse impact on their life? Or will being included in the results of the AI system have a negative or adverse impact on the individuals affected?

If the answer to question 21 is “no”, the Commissions’ view is that the differential treatment is likely not discrimination. However, we note that “adverse impact” determinations are made by human rights tribunals, commissions, courts, and other adjudicators on a case-by-case basis. As such, caution is warranted, and legal advice should be sought before answering this question.

  22. Is there a justifiable reason for why the system is showing differential treatment?

If the answer to question 22 is “yes”, then the Commissions’ view is that the differential treatment is likely not discrimination. However, we note that what organizations consider to be justifiable from a business or common-sense perspective may not necessarily be consistent with the assessment of a human rights tribunal, commission, court or another adjudicator. Accordingly, we recommend that legal advice be sought when addressing questions regarding the justifiability of any differential treatment in which any prohibited ground of discrimination may be a factor, even if unintentionally.
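
Again purely as an illustration, the sketch below summarizes, on our reading of the Commissions’ guidance, how the Section 3B questions chain together. It is our own sketch rather than part of the Tool, and its outputs should be read as prompts to seek legal advice, not as legal conclusions.

```python
# Our own illustrative sketch of the Section 3B flow described above; it is not
# part of the OHRC Tool, and its outputs are not legal conclusions.

def section_3b_permissibility(q20: bool, q21: bool, q22: bool) -> str:
    """q20: is the purpose to advance a historically disadvantaged group?
    q21: would exclusion from (or inclusion in) the results have an adverse impact?
    q22: is there a justifiable reason for the differential treatment?"""
    if q20:
        # Possibly a "special program"; confirm with counsel that it qualifies.
        return "likely not discrimination - confirm special program status with counsel"
    if not q21:
        # No adverse impact identified; tribunals assess this case by case.
        return "likely not discrimination - no adverse impact identified"
    if q22:
        # A justification is asserted; an adjudicator may view it differently.
        return "likely not discrimination - have counsel review the justification"
    return "differential treatment may amount to discrimination - seek legal advice"
```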

Section 4: Does the AI system consider accommodation?

This section asks whether the AI system is equally available, accessible, and relevant to all parties, such that the rights or needs of communities represented under all protected grounds have been considered in the creation of the AI system; whether the accessibility and availability of the AI system were tested with diverse populations to ensure that it is accessible to all parties; whether the AI system respects the rights of children and takes their best interests into account; and whether processes were put in place to test and monitor the AI system during the development, deployment and use phases to uncover potential harm to children.

The Commissions note that, if the organization has not considered how different populations might need to be accommodated, and addressed accommodation needs, there may be a violation of human rights obligations. We recommend that, as always, organizations seek legal advice with respect to accessibility and accommodation issues. If the AI system will impact children or be used by children (or age restrictions are to be implemented), legal advice may also be of assistance as different Canadian jurisdictions have different age-based discrimination rules and exceptions.

Section 5: Results

Based on the user’s responses to questions in each categorical section above, the AI system will be ranked in one of six categories. According to the Commissions, if the AI system is at “high risk” for human rights issues and you are unable to determine if its results are discriminatory, “you are in a precarious position” as organizations in Ontario have an obligation to ensure that the products and services they provide do not violate human rights law. Employers are reminded that they are also required to comply with human rights law and avoid unlawful discrimination in employment, and that responsibility for any inadvertent discrimination by employers who use AI-based tools cannot be contractually disclaimed or waived.

Part B of the HRIA: Mitigation strategies

This part of the Tool is divided into four mitigation-related sections: (1) Internal Procedures for Assessing Human Rights; (2) Explainability, Disclosure and Data Quality; (3) Consultations; and (4) Testing and Review. The assessment questions per section are outlined below, with key considerations.

Section 1: Internal Procedures for Assessing Human Rights encourages organizations to develop an internal human rights review system by prompting them to consider whether the organization has created a process to review and assess human rights regularly throughout the lifecycle of the AI system; what stage of the AI lifecycle the organization is in; how often the team meets to review and assess human rights for the AI system, and who is included on that team; if a human rights issue is flagged during the assessment, whether it is clear who should be informed; who has the knowledge and authority to assess, address, and mitigate such an issue; who has oversight of the human rights assessment; and whether individuals are encouraged to flag human rights issues without concern for repercussions.

Section 2: Explainability, Disclosure, and Data Quality focuses on those subject matters that, if properly addressed before roll-out, can assist in avoiding inadvertent links between AI outputs and individuals or groups identified by prohibited grounds of discrimination. In particular, the questions under this section inquire about the transparency and disclosure of AI systems; about AI outputs, data accuracy, and data reliability; as well as about explainability models (i.e., the process of making an AI system or decision comprehensible to humans).

Section 3: Consultations asks questions aimed at engaging people who are likely to be impacted by an AI system in the design, purposes, and end use of the AI system, which can help address unintended consequences and applications of an AI system.

Finally, Section 4: Testing and Review addresses three important components of AI testing and review: AI auditing, metrics testing, and de-biasing. The HRIA states that any AI system that has been identified as a high-risk system should be tested and reviewed frequently.

Practical considerations prior to using the HRIA

Before using or completing the Tool, organizations should consider obtaining legal advice on who should complete the assessment and how the assessment should be completed with a view to ensuring that the Tool is not used before appropriate confidentiality, privacy, security, and legal privilege considerations are taken into account.

In addition, should organizations encounter difficulties in assessing the use or development of “AI systems” as described in the Tool, organizations should consider discussing with their legal teams how to engage an expert while not compromising the considerations noted above.

If you have any questions regarding the Tool, human-rights related aspects of AI impact assessments, or the assessment of AI systems for legal compliance from a human rights perspective, please do not hesitate to reach out to the authors or to any member of DLA’s Employment and Labour Group.