On May 23, 2023, the Biden-Harris Administration announced efforts to advance the research, development, and deployment of responsible artificial intelligence that "protects people's rights and safety." The announcement adds to the list of previous actions by the administration to promote AI innovation, including the Blueprint for an AI Bill of Rights, the AI Risk Management Framework, and a roadmap for standing up a National AI Research Resource.

The most recent announcements include:

  • An updated roadmap to focus federal investments in AI research and development ("R&D"): The White House Office of Science and Technology Policy (OSTP) released an updated National AI R&D Strategic Plan that outlines priorities and goals for federal investment in AI R&D. The plan sets out nine strategies:
    • Making long-term investments in fundamental and responsible AI research, including efforts to make AI easier to use and more reliable.
    • Developing effective methods for human-AI collaboration.
    • Understanding and addressing the ethical, legal, and societal implications of AI, including by developing metrics and frameworks for verifiable accountability, fairness, privacy, and bias.
    • Ensuring the safety and security of AI systems, including research to advance the ability to test, validate, and verify the functionality and accuracy of AI systems, and to secure the systems from cybersecurity and data vulnerabilities.
    • Developing shared public datasets and environments for AI training and testing.
    • Measuring and evaluating AI systems through standards and benchmarks, including by developing a broad spectrum of evaluative techniques for AI.
    • Understanding the national AI R&D workforce needs.
    • Expanding public-private partnerships to accelerate advances in AI, including promoting opportunities for sustained investment in responsible AI R&D.
    • Establishing a principled and coordinated approach to international collaboration in AI research, including to address global challenges such as environmental sustainability, healthcare, and manufacturing.
  • A request for input on AI issues: The OSTP published a request for input on national priorities regarding AI. Topics include protecting rights, safety, and national security, advancing equity and civil rights, bolstering democracy and civic participation, promoting economic growth, and innovating in public services. Interested members of the public and organizations are invited to submit comments by 5 pm on July 7, 2023.
  • A report on risks and opportunities related to AI in education: The US Department of Education's Office of Educational Technology released a new report summarizing risks and opportunities related to AI in the educational space. The report makes seven recommendations for policy action:
    • Emphasize humans in the loop: The Department states that humans, including teachers, parents, families, students, policymakers, and systems leaders, should examine the relevant "loops" for which they are responsible, analyze the role of AI in those loops, and determine steps to retain support for "the primacy of human judgment in educational systems."
    • Align AI models to a shared vision for education: The report highlights the importance of "centering teaching and learning in all considerations about the sustainability of an AI model for an educational use," rather than prioritizing the advancement of AI as separate from educational goals.
    • Design using modern learning principles: AI development should be grounded in modern principles of teaching and learning. For example, recent principles emphasize collaborative and social learning rather than an individualistic approach; AI should therefore be built with capabilities that support collaborative and social learning.
    • Prioritize strengthening trust: The report found that many stakeholders mistrust emerging technologies. It suggests that associations such as the State Educational Technology Directors Association and the Consortium for School Networking work with educational leaders, teachers, and other stakeholders to "bring all parts of the educational ecosystem into discussions about trust."
    • Inform and involve educators: Stakeholders are concerned that AI could reduce respect for educators or the perceived value of their skills. The report suggests addressing this concern by informing and involving educators at each step of designing, developing, testing, improving, adopting, and managing AI-enabled technology.
    • Focus R&D on addressing context and enhancing trust and safety: The report states that AI-enabled systems should be designed with attention to the context in which students use them, including disabilities and language barriers.
    • Develop education-specific guidelines and guardrails: Going forward, key laws protecting students and children, such as FERPA, COPPA, and IDEA, should be reconsidered as new situations arise in the use of AI-enabled technologies.


This alert provides general coverage of its subject area. We provide it with the understanding that Frankfurt Kurnit Klein & Selz is not engaged herein in rendering legal advice, and shall not be liable for any damages resulting from any error, inaccuracy, or omission. Our attorneys practice law only in jurisdictions in which they are properly authorized to do so. We do not seek to represent clients in other jurisdictions.