12 April 2022

U.S. House and Senate Reintroduce the Algorithmic Accountability Act Intended to Regulate AI

McCarthy Tétrault LLP

On February 3, 2022, U.S. Democratic lawmakers introduced in both the Senate (S. 3572) and the House of Representatives (H.R. 6580) a bill titled the "Algorithmic Accountability Act of 2022" (the "U.S. Act"). The bill aims to hold organizations accountable for their use of algorithms and other automated systems involved in making critical decisions that affect the lives of individuals in the U.S. Among other requirements, the U.S. Act would require covered entities to conduct impact assessments of the automated systems they use and sell, in accordance with regulations to be set forth by the Federal Trade Commission ("FTC").

The U.S. Act, which we reported on in a previous blog post, was first introduced in April 2019 but failed to gain the support it needed to become law. It has since undergone significant modifications and has now been reintroduced in a context where regulating AI is at the forefront of the legislative agenda of several U.S. states and other jurisdictions, notably the EU with its proposed Artificial Intelligence Act (the "EU Act"). Despite these updates, the underlying policy objective of the U.S. Act remains the same. In recent years, there have been numerous reports of flawed algorithms rendering decisions that magnify societal injustice or even result in dangerous outcomes, notably in the contexts of healthcare, lending, housing, employment, and education.1 The U.S. Act thus intends to increase transparency over how algorithms and automated systems are used in decision-making in order to reduce discriminatory, biased or harmful outcomes.

In this article, we highlight key aspects of the U.S. Act, compare the U.S. Act's requirements with those of the EU Act, and provide insights on the future of AI legislation in Canada. 

Covered entities and key definitions

In its current form, the U.S. Act applies to businesses falling under its definition of "covered entities". Those can be divided into two broad categories: (i) businesses that deploy "augmented critical decision processes" ("ACDP"); and (ii) businesses that deploy "automated decision systems" ("ADS") which are then used by the first category of businesses in an ACDP.

The first category covers big players with significant market presence or a large user base,2 whereas the second category also captures smaller players3 who act as ADS suppliers of businesses falling under the first category.4
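
To make these size thresholds concrete, the tests described in footnotes 2 and 3 can be sketched as simple boolean logic. The Python snippet below is purely illustrative: the field names and simplified conditions are our own shorthand for the statutory tests, not language from the U.S. Act, and it omits the deeming rules and other qualifying conditions.

    from dataclasses import dataclass

    @dataclass
    class Business:
        # Hypothetical simplifications of the statutory tests (see footnotes 2-3).
        avg_annual_gross_receipts: float    # 3-taxable-year average, in USD
        equity_value: float                 # deemed equity value, in USD
        consumers_profiled: int             # consumers, households, or devices whose
                                            # identifying information is used for an ADS/ACDP
        supplies_ads_to_first_category: bool  # deploys an ADS expected to be used
                                              # in a first-category entity's ACDP

    def is_first_category(b: Business) -> bool:
        # Footnote 2: the tests are disjunctive (OR).
        return (b.avg_annual_gross_receipts > 50_000_000
                or b.equity_value > 250_000_000
                or b.consumers_profiled > 1_000_000)

    def is_second_category(b: Business) -> bool:
        # Footnote 3: a lower size test combined (AND) with a supplier relationship.
        return ((b.avg_annual_gross_receipts > 5_000_000
                 or b.equity_value > 25_000_000)
                and b.supplies_ads_to_first_category)

On this reading, a vendor with, say, $10 million in average annual gross receipts would not itself meet the first-category thresholds, but would still be covered as a second-category entity if its ADS is expected to be used by a first-category business in an ACDP.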

An ACDP is defined as "a process, procedure, or other activity that employs an automated decision system to make a critical decision", whereas an ADS is broader and covers "any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment".5 As for what constitutes a "critical decision", the U.S. Act lists several categories of decisions which have a significant impact on individuals' lives, namely decisions related to access to or the cost of education, employment, essential utilities, family planning, financial services, healthcare, housing, and legal services.6

Within two years of its enactment, the U.S. Act would require the FTC to promulgate regulations requiring covered entities to perform impact assessments of (i) any deployed ACDP or (ii) any deployed ADS developed for use by a covered entity of the first category in an ACDP.7 Based on the results of these impact assessments, covered entities would be required to "attempt to eliminate or mitigate, in a timely manner, any impact made by an augmented critical decision process that demonstrates a likely material negative impact with legal or similarly significant effects on a consumer's life".8 The exact method for eliminating or mitigating such negative impacts would once again depend on FTC regulations. Unlike the EU Act, the U.S. Act does not set a range of administrative fines that can be imposed in case of non-compliance, but it provides that any violation of the U.S. Act or its regulations shall be treated as "a violation of a rule defining an unfair or deceptive act or practice" under section 18 of the FTC Act.9 Moreover, the FTC would have discretion to adopt further rules to ensure compliance, which could include penalties specific to the U.S. Act.10 The attorney general or any other authorized officer of a State may also bring a civil action where there is reason to believe that the interests of the residents of that State are threatened or adversely affected by a practice that violates the U.S. Act or its regulations.11

The U.S. Act also provides for several other requirements that the FTC would put in place through regulations for covered entities, such as:

  • Maintaining documentation of impact assessments performed;
  • Disclosing their status as a covered entity;
  • Submitting, on an annual basis, a summary report for the ongoing impact assessment of any deployed ACDP or ADS; and
  • Submitting an initial summary report for any new ACDP or ADS prior to deployment.12

Content of the impact assessment

While the FTC still needs to define the precise form and content of impact assessments, which might vary depending on the industry and the size of the business, the U.S. Act already provides a long list of action items for covered entities to carry out when conducting them.13 We have extracted below some of the key action items from the U.S. Act and have organized them based on principles found in the Responsible AI Impact Assessment Tool developed by the International Technology Law Association in collaboration with, among others, the authors of this blog post, Charles S. Morgan and Francis Langlois.

  • Ethical Purpose and Social Benefit: Evaluate any previously existing critical decision-making process used for the same critical decision prior to the deployment of the new ACDP, along with any related documentation or information, such as any known harm, shortcoming or other material negative impact on consumers of the existing process, the intended benefits of and need for the ACDP and the intended purpose of the ADS or ACDP.14
  • Accountability: Identify any likely material negative impact of the ADS or ACDP on consumers and assess any applicable mitigation strategy, such as (i) by documenting any steps taken to eliminate or reasonably mitigate any likely material negative impact identified, including by removing the system or process from the market or terminating its development; and (ii) by documenting which such impacts were left unmitigated and the rationale for the inaction.15 Support and perform ongoing training and education for all relevant employees, contractors, or other agents regarding any documented material negative impacts on consumers from similar ADS or ACDP and any improved methods of developing or performing an impact assessment.16
  • Transparency and Explainability: Evaluate the rights of consumers, such as by assessing (i) the extent to which the covered entity provides consumers with clear notice that such system or process will be used and a mechanism for opting out of such use; and (ii) the transparency and explainability of such system or process and the degree to which a consumer may contest, correct, or appeal a decision or opt out of such system or process.17
  • Fairness and Non-Discrimination: Perform ongoing testing and evaluation of the current and historical performance of the ADS or ACDP using measures such as benchmarking datasets, representative examples from the covered entity's historical data, and other standards, including by documenting an evaluation of any differential performance associated with consumers' race, color, sex, gender, age, disability, religion, family status, socioeconomic status, or veteran status, and any other characteristics the FTC deems appropriate.18
  • Privacy and Security: Perform ongoing testing and evaluation of the privacy risks and privacy-enhancing measures of the ADS or ACDP, such as by (i) assessing and documenting the data minimization practices of such system or process and the duration for which the relevant identifying information and any resulting critical decision is stored; (ii) assessing the information security measures in place with respect to such system or process; and (iii) assessing and documenting the current and potential future or downstream positive and negative impacts of such system or process on the privacy, safety, or security of consumers and their identifying information.19

Comparison with the EU's approach

Whereas the U.S. Act focuses on automated processes and systems deployed to render "critical decisions," the EU Act covers a wider range of AI systems and provides different regulatory requirements that scale with the level of risk that an AI system poses to the public. Currently, the EU Act separates AI systems into three categories: (i) unacceptable risk; (ii) high-risk; and (iii) low/minimal risk, which we addressed in greater detail in a previous article. Some similarities can be drawn between high-risk AI systems and ACDPs. Both are involved in important decisions that have a significant impact on individuals, such as decisions related to access to education, employment, and essential private services.20 However, high-risk AI systems also include AI systems used in the public sector (e.g. for law enforcement, border control, and administration of justice), which fall outside the scope of the U.S. Act.21

With regard to the enforcement of impact assessments, since the U.S. Act is still at an early stage and leaves much to be decided by the FTC, it is hard to tell whether the U.S. Act will come to resemble the EU Act's "conformity assessment" framework. In its current form, the EU Act places a heavy emphasis on requiring businesses to conduct ex ante conformity assessments to ensure that high-risk AI systems comply with all regulatory requirements before entering the market. Just as for physical products under existing EU product safety legislation, these systems will have to bear a CE marking22 indicating their conformity before being traded within the EU.23 In contrast, the U.S. Act does not frame AI regulation as a subset of product safety legislation, but rather as a standalone regime.24 In addition, unlike conformity assessments, which use the essential requirements under Title III, Chapter 2 of the EU Act as a benchmark of compliance for high-risk AI systems and their risk management systems, the U.S. Act does not currently require impact assessments to confirm adherence of the ACDP to pre-established standards.25 Instead, as noted above, it is left to the FTC to set out requirements for when covered entities must "attempt to eliminate or mitigate" material negative impacts on consumers' lives revealed by an impact assessment.26 Moreover, when conducting impact assessments, covered entities are to carry out the action items listed in the U.S. Act "to the extent possible, as applicable to such covered entity as determined by the [FTC]".27 Thus, rather than setting broad standards applicable to all high-risk AI systems, the U.S. Act leaves room for tailored requirements that may vary from one covered industry to another. The U.S. Act also provides that covered entities may justify their inability to fully conduct impact assessments, a possibility not contemplated for conformity assessments under the EU Act.28 Whether the nuances between the two regimes ultimately translate into practical differences will largely depend on the FTC, as well as on possible amendments to both pieces of proposed legislation.

What's next for Canada?

Currently, Canada does not have any law dealing specifically with AI that would be comparable to the U.S. Act or the EU Act. In the public sphere, Canada has the Directive on Automated Decision-Making (the "Directive"), which requires an algorithmic impact assessment for each automated decision-making system deployed by a federal institution. The Directive's impact on businesses is limited to companies that provide the federal government with technology incorporating elements of automated decision-making, and there is as yet no equivalent policy for the private sector. For more information on the Directive, we invite you to consult our previous publication.

In Canada, AI law is still treated as an add-on to privacy law. Some legislative proposals to modernize privacy laws, such as the federal Bill C-11 and the Quebec Bill 64 (whose main provisions will enter into force over the next three years, starting in September 2022), contain transparency and explainability requirements partly inspired by the EU's General Data Protection Regulation ("GDPR") provision on automated decision-making and profiling.29 However, those bills do not go so far as to require businesses to conduct AI impact or conformity assessments. The current version of Bill C-11 requires businesses to make available "a general account of the organization's use of any automated decision system to make predictions, recommendations or decisions about individuals that could have significant impacts on them" and to provide, upon request from an individual, "an explanation of the prediction, recommendation or decision [made by the automated decision system]".30 Bill 64 provides similar obligations, with the additional requirement of giving the persons concerned by such decisions "the opportunity to submit observations to a member of the personnel of the enterprise who is in a position to review the decision".31 Although these obligations are far less onerous for businesses than conducting impact assessments, they potentially cover a broader range of decisions, as they are not limited to critical decisions.

We note, however, that Bill 64 includes a provision requiring organizations to perform "privacy impact assessments" to evaluate projects involving the acquisition or development of information systems or electronic delivery systems that process personal information. Although these are specifically privacy impact assessments, they may be required in certain contexts involving AI systems.

While provincial and federal legislatures have taken limited action with regard to AI regulation, other public bodies have taken a keen interest in this area. In November 2020, the Office of the Privacy Commissioner of Canada (OPC) issued recommendations to Parliament on a regulatory framework for AI, as well as policy proposals for privacy law reform. The proposals suggested introducing language in privacy legislation to hold organizations responsible for harm resulting from the use of artificial intelligence if they fail to take preventative measures, such as conducting privacy impact assessments or third-party audits.32 However, the proposals did not favour mandating these measures under threat of sanctions, as is the case under the current U.S. and European approaches. On a more sectoral level, the Autorité des marchés financiers ("AMF") in Quebec published a report in November 2021 titled "L'Intelligence artificielle en finance : Recommandations pour une utilisation responsable" (Artificial Intelligence in Finance: Recommendations for Its Responsible Use) (in French only), which provides key recommendations regarding the use of AI in the financial industry. Ensuring that AI systems do not exacerbate discrimination and social inequity was among the main issues addressed by the report. The report also recommended that the regulator adopt a model framework for the responsible use of AI in finance.33 It is therefore likely that the AMF will, in the coming years, encourage financial institutions in Quebec to self-regulate their use of AI based on ethical principles and industry best practices, a trend that could extend to other economic sectors and other provinces.

Conclusion

Regulation of modern AI systems began with the GDPR, as one component of privacy-focused legislation. This approach has proven influential, as shown by the recent Canadian legislative proposals that adopt this model. With the EU Act, and now the U.S. Act, we are witnessing the birth of AI law as an autonomous legal regime, yet one that remains close to privacy law, as shown in particular by the U.S. Act's focus on impact assessments, opt-out mechanisms and transparency obligations.

For the U.S. Act to become law, it will have to go through multiple rounds of revisions, debates, and amendments, and would require majority support in both the House of Representatives and the Senate. Should the U.S. Act receive the support it needs, it would still take several years before we can assess whether the U.S. Act achieves its policy objectives and whether the cost of compliance is proportionate to the protection it offers consumers. The success or failure of both the U.S. and Europe in this area will undoubtedly serve as an important precedent for other legislators around the world, including in Canada.

Footnotes

1. For a list of supporters, see "Support for the Algorithmic Accountability Act of 2022" available at (https://www.wyden.senate.gov/imo/media/doc/Support%20for%20the%20Algorithmic%20Accountability%20Act%20of%202022.pdf)

2. The first category of covered entities includes businesses that had greater than $50,000,000 in average annual gross receipts or are deemed to have greater than $250,000,000 in equity value for the 3-taxable-year period preceding the most recent fiscal year OR possess, manage, modify, handle, analyze, control, or otherwise use identifying information about more than 1,000,000 consumers, households, or consumer devices for the purpose of developing or deploying any ADS or ACDP.

3. The second category of covered entities includes businesses that had greater than $5,000,000 in average annual gross receipts or are deemed to have greater than $25,000,000 in equity value for the 3-taxable-year period preceding the most recent fiscal year AND deploy any automated decision system that is developed for implementation or use, or that the covered entity reasonably expects to be implemented or used, in an ACDP by any covered entity of the first category.

4. For a complete description of conditions for qualifying as a "covered entity," see Sec. 2(7) of the U.S. Act.

5. Sec. 2(1)&(2) of the U.S. Act.

6. Sec. 2(8) of the U.S. Act.

7. Sec. 3(b)(1)(A) of the U.S. Act.

8. Sec. 3(b)(1)(H) of the U.S. Act.

9. Sec. 9(a)(1) of the U.S. Act.

10. Sec. 9(a)(2)(D) of the U.S. Act.

11. Sec. 9(b)(1)-(3) of the U.S. Act.

12. Sec. 3(b)(1)(B)-(E) of the U.S. Act.

13. For the full list, see Sec. 4 of the U.S. Act.

14. Sec. 4(a)(1) of the U.S. Act.

15. Sec. 4(a)(9) of the U.S. Act.

16. Sec. 4(a)(5) of the U.S. Act.

17. Sec. 4(a)(8) of the U.S. Act.

18. Sec. 4(a)(4)(E) of the U.S. Act.

19. Sec. 4(a)(3) of the U.S. Act.

20. Annex III of the EU Act.

21. Ibid; see also Sec. 2(7)(A) of the U.S. Act and Sec. 5(a)(2) of the Federal Trade Commission Act (15 U.S.C. 45(a)(2)).

22. See CE marking, available at (https://ec.europa.eu/growth/single-market/ce-marking_en) which provides: "The letters 'CE' appear on many products traded on the extended Single Market in the European Economic Area (EEA). They signify that products sold in the EEA have been assessed to meet high safety, health, and environmental protection requirements."

23. Recital 67 of the EU Act.

24. Sec. 3 (b)(1)(A)(ii) of the U.S. Act.

25. Article 43(1)-(2) and Annex VI of the EU Act.

26. Sec. 3(b)(1)(H) of the U.S. Act.

27. Sec. 4(a) of the U.S. Act.

28. Sec. 4(12) of the U.S. Act. 

29. Art. 22 of the GDPR.

30. Sec. 62(2)(c) and 63(3) of Bill C-11.

31. Sec. 12.1 of the Act respecting the protection of personal information in the private sector, CQLR c P-39.1, as amended by Bill 64.

32. See paragraph 5(c) of the Policy Proposals for PIPEDA Reform to Address Artificial Intelligence report. Note that the proposals were published by the OPC but do not necessarily reflect its opinion.

33. See p. 14 of L'Intelligence artificielle en finance : Recommandations pour une utilisation responsable.


