More and more, artificial intelligence (AI) and other automated systems make decisions affecting our lives and economy. These systems are not broadly regulated in the United States—although that will change this year in several states.

President Biden recently unveiled a blueprint1 for an "AI Bill of Rights," motivated by concerns about potential harms from automated decision-making. Arising from an initiative the White House Office of Science and Technology Policy (OSTP) launched in 2021,2 the AI Bill of Rights lays out five principles to foster policies and practices—and automated systems—that protect civil rights and promote democratic values.

For now, at least, adherence to these principles (and the steps3 recommended for observing them) remains voluntary—the blueprint is a guidance document with no enforcement authority attached to it.

Notably, at inception, OSTP was unsure how the AI Bill of Rights might be enforced:

Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this "bill of rights" or adopting new laws and regulations to fill gaps. States might choose to adopt similar practices.4

The Biden administration decided to publish a nonbinding white paper, potentially recognizing the difficulty of shepherding legislation through the 118th Congress. Indeed, the document's first page proclaims that it "is non-binding and does not constitute U.S. government policy."5 Nor does it "constitute binding guidance for the public or federal agencies and therefore does not require compliance with the principles described herein."6

Notwithstanding this disclaimer, the blueprint provides a clear indication of the Biden administration's AI regulatory policy goals. Executive Branch departments and independent agencies alike are likely to follow this lead in their respective domains.

Issues of Definition

In the debate over the European Union's pending Artificial Intelligence Act, the definition of "artificial intelligence" has attracted much discussion. OSTP sidesteps this issue in the blueprint by addressing "automated systems," which are defined as "any system, software or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities."7 OSTP adds, "Automated systems include, but are not limited to, systems derived from machine learning, statistics or other data processing or AI techniques, and exclude passive computing infrastructure,"8 which OSTP also defines.

The blueprint's coverage of "automated systems" instead of "artificial intelligence" offers business a mixed bag. On the one hand, the broader scope aligns with the regulation of automated decision-making under California,9 Colorado,10 Connecticut,11 and Virginia12 privacy laws and New York City's law13 on automated employment decision tools, all taking effect this year, as well as Article 2214 of the EU/UK General Data Protection Regulation.

On the other hand, it potentially threatens international harmonization of regulations based on the seemingly narrower scopes of the UNESCO Recommendation on the Ethics of Artificial Intelligence15 and the OECD AI Principles16 (also shared by the G20).17

Much of the blueprint concerns protection of "rights, opportunities or access." OSTP explains this phrase as "the set of: civil rights, civil liberties and privacy, including":

  • "freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts";
  • "equal opportunities, including equitable access to education, housing, credit, employment, and other programs"; or
  • "access to critical resources or services, such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits."18



1. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, White House (October 4, 2022).

2. Eric Lander & Alondra Nelson, "Americans Need a Bill of Rights for an AI-Powered World," Wired.

3. Blueprint for an AI Bill of Rights: From Principles to Practice, White House (October 4, 2022).

4. Eric Lander & Alondra Nelson, "Americans Need a Bill of Rights for an AI-Powered World," Wired.

5. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, White House (October 4, 2022), at 2.

6. Id.

7. Id. at 10.

8. Id.

9. California Business and Professions Code § 17940.

10. Colorado Senate Bill 21-190.

11. Connecticut Public Act No. 22-15.

12. Virginia Consumer Data Protection Act § 59.1.

13. New York City Code § 20-870.

14. Council Regulation 2016/679, Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) 38, art. 22 (EU).

15. Recommendation on the Ethics of Artificial Intelligence, UNESCO (November 23, 2021).

16. Recommendation of the Council on Artificial Intelligence, OECD, OECD/LEGAL/0449 (May 21, 2019).

17. G20 Ministerial Statement on Trade and Digital Economy, G20 (June 9, 2019).

18. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, White House (October 4, 2022), at 10.

Originally Published by The Journal of Robotics, Artificial Intelligence & Law

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.