On December 17, 2024, just before leaving town until the new session of Congress, the U.S. House of Representatives' Bipartisan Artificial Intelligence Task Force issued a "Report on Artificial Intelligence," which addresses (among other things) several key aspects of privacy and data security concerning artificial intelligence. The Report is the result of a process that began in February 2024, when Speaker Mike Johnson and Democratic Leader Hakeem Jeffries announced the establishment of a bipartisan Task Force on Artificial Intelligence to explore how Congress can ensure America continues to lead the world in AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.
Overall, the Report recognized the complex interplay between AI advancement and privacy/security concerns, advocating for a balanced approach that promotes innovation while protecting individual rights and national interests. The Report includes guiding principles and forward-looking recommendations intended to advance America's leadership in AI innovation responsibly.
The Report is lengthy, coming in at 273 pages. So that you do not have to read the whole thing, here is a summary of the main points relating to privacy and data security:
Data Privacy
The word "privacy" is mentioned 156 times in the Report. In its discussion, the Report acknowledged that AI poses significant challenges to data privacy and made several key findings and recommendations:
- AI has the potential to exacerbate privacy harms.
- Americans currently have limited recourse for many privacy harms.
- New federal privacy laws could potentially augment existing state privacy laws.
To address these privacy concerns, the Report recommended:
- Exploring mechanisms to promote access to data in privacy-enhanced ways.
- Ensuring existing privacy laws are generally applicable and technology-neutral.
- Improving AI system design with privacy-by-design principles and utilizing new privacy-enhancing technologies.
Data Security
While data security does not receive a separate discussion in the Report, the term "cybersecurity" is mentioned 74 times, and data security concerns are interwoven throughout the Report's findings and recommendations, including:
- The federal government should improve system cybersecurity when adopting AI.
- Secure access to data should be part of AI adoption.
- The AI cybersecurity market is projected to reach $60.6 billion by 2028.
In making these findings and recommendations, the Report noted ominously: "Each year, the federal government spends over $100 billion on information technology and cybersecurity. Approximately 80% of this spending goes to operating existing legacy systems that are typically outdated and underpinned by archaic software and hardware components. These legacy systems create security and operational risks and are costly to maintain and remediate when incidents occur."
Government Use of AI
The Report made several findings and recommendations regarding the use of AI by government agencies, which, in turn, have implications for privacy and data security:
- "Irresponsible or improper use [of AI] fosters risks to individual privacy, security, and the fair and equal treatment of all citizens by their government."
- The Federal government should support and adopt AI standards to govern AI use.
- There should be efforts to reduce administrative burden for AI use in government.
- The Executive Branch should encourage and support data governance strategies.
National Security
The Report recognized AI as a critical component of national security, an area that inherently implicates data privacy and security considerations, and recommended that Congress:
- Continue Congressional oversight over autonomous weapon policies.
- Support international cooperation on AI used in military contexts.
Sectoral Approaches
The Report advocated for a sectoral regulatory structure for AI, which would likely impact how privacy and data security are addressed across different industries.
Transparency and Human Oversight
While not exclusively applicable to privacy and security issues, the Report emphasized:
- Maintaining human oversight in AI deployment.
- Ensuring transparency in AI systems.
While the Report focuses more on identifying issues than solving them, there is some hope for the future in the fact that it was developed and issued jointly by Republicans and Democrats in the U.S. House of Representatives. Whether this results in specific legislation that addresses privacy and data security issues in AI will be something to watch for in 2025.
Originally published 31 December 2024
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.