On November 19, the Senate Commerce Committee's Subcommittee on Consumer Protection, Product Safety, and Data Security convened a hearing on "Protecting Consumers from Artificial Intelligence Enabled Fraud and Scams." The Subcommittee heard from witnesses who testified about how AI technologies and tools enable fraud and scams, while Senators from both parties asked questions that highlighted the need for federal laws to crack down on such activity.
The hearing comes as Senators try to pass AI legislation during the lame-duck session of Congress. Subcommittee Chair Hickenlooper (D-CO) discussed five AI bills during the hearing, all of which he noted have received bipartisan support, and vowed to get them "across the finish line and passed into law in the coming weeks."
Opening Statements
In his opening remarks, Subcommittee Chair Hickenlooper acknowledged the many benefits that AI brings but noted that "for all those benefits, we have to mitigate and anticipate the concurrent risks that this technology brings along with it." To this end, he specifically discussed five AI bills that he believes have bipartisan support, which we cover later in this newsletter, and vowed to get them "across the finish line and passed into law in the coming weeks."
Subcommittee Ranking Member Marsha Blackburn (R-TN) focused on AI scams and fraud and their now widespread impact. She noted that the FTC Consumer Sentinel Network Data Book found that "scams increased a billion dollars over the last 12 months, to $10 billion... And of course we know AI is what is driving a lot of this." To combat the fraud and scams driven in part by AI, Senator Blackburn called for an "encompassing" and "comprehensive" policy approach, including "an actual online privacy standard, which we've never passed."
The Chair and Ranking Member's statements both highlight that AI-enabled scams and other harms are a bipartisan point of concern.
Expert Testimony: Shared Concerns and Solutions
The following experts testified at the hearing:
- Dr. Hany Farid, a professor at the University of California, Berkeley, who studies deepfakes and other AI-generated or digitally manipulated images
- Justin Brookman, the Director of Technology Policy at Consumer Reports
- Mounir Ibrahim, the Chief Communications Officer and Head of Public Policy at Truepic, a company that provides digital authenticity technology
- Dorota Mani, the mother of a victim of an AI-generated deepfake
The experts' testimonies and responses to the Senators' questions focused on four main themes:
- Content Provenance. Mr. Ibrahim and the other panelists noted that content provenance (metadata attached to content that indicates whether or not it is AI-generated) is one of the most promising solutions currently available to help people differentiate AI-generated content from real content. Senator Hickenlooper asked Mr. Ibrahim about the incentives that exist to scale content provenance technologies and make them widely available and used. Mr. Ibrahim responded that "there have not been the financial incentives...or consequences for these platforms to better protect or at least give more transparency to their consumers."
- Comprehensive Privacy Laws. Both the Subcommittee Chair and Ranking Member pointed to the need for comprehensive data privacy laws in their remarks. Dr. Farid stated, "It should be criminal that we don't have a data privacy law in this country."
- Holding Creators of AI Content and AI Tools Accountable. Several Senators, as well as the panelists, discussed the need to shift the burden from consumers to the companies that produce AI content and AI tools, requiring those companies to ensure that their content or tools are not misused or harming people. "If you're an AI company," testified Dr. Farid, "and you're allowing anybody to clone anybody's voice by simply clicking a box that says, 'I have permission to use their voice,' you're the one who's on the hook for this." Senator Hickenlooper highlighted that the AI Research, Innovation, and Accountability Act, as we've covered, would create "a framework to hold AI developers accountable" for their content and AI tools.
- Stronger Enforcement. Mr. Brookman noted that while "fraud and scams are already illegal," "because of insufficient enforcement – or consequences when caught – there is not enough deterrence against potential scammers." He called on Congress to grant the FTC additional resources to hire more staff and expand its legal powers "to allow the agency to keep pace with the threats that plague the modern economy."
Legislation Discussed at the Hearing
Five AI bills, which we've covered before, were discussed during the hearing. None of these bills would constitute the comprehensive data privacy law that the Senators and panelists called for, but they would lay the groundwork for increased transparency from AI developers and protect consumers from deepfakes and other harmful AI-generated content:
- The Future of Artificial Intelligence Innovation Act of 2024
  - The Future of AI Innovation Act would permanently establish the Artificial Intelligence Safety Institute, which would create voluntary standards and guidelines for AI and conduct research on AI model safety issues. The Institute would also create a test program for vendors of foundation models to test their models "across a range of modalities." The bill would also direct the National Institute of Standards and Technology (NIST) and the Department of Energy to establish a testbed for the discovery of new materials using AI systems.
- Validation and Evaluation for Trustworthy AI Act
  - The VET Artificial Intelligence Act would require the Director of NIST to develop "voluntary guidelines and specifications for internal" and external artificial intelligence assurance, which is the impartial evaluation of an AI model by a third party to identify errors in the functioning and testing of the model and to verify claims about the model's functionality.
- Artificial Intelligence Research, Innovation, and Accountability Act
  - Regarding research and innovation, the Artificial Intelligence Research, Innovation, and Accountability Act (AIRIA) would direct the Secretary of Commerce to conduct research on "content provenance and authentication for human and AI-generated works" and the Comptroller General to study "statutory, regulatory, and policy barriers to the use of AI" within the federal government. Regarding accountability, the bill would also create standardized definitions of common AI terms; transparency requirements for AI use, including disclosures that content is AI-generated; and disclosure and reporting obligations for high-impact AI systems, including those involved in making decisions related to housing, employment, credit, education, health care, or insurance, among other requirements.
- The COPIED Act
  - The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (the COPIED Act) would aim to address the rise of deepfakes. The bill would direct federal agencies to develop standards for AI-generated content detection, establish AI disclosure requirements for developers and deployers of AI systems, and prohibit the unauthorized use of copyrighted content to train AI models.
- TAKE IT DOWN Act
  - The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, known as the TAKE IT DOWN Act, would criminalize the publication of non-consensual intimate imagery, including certain AI-generated deepfakes, and also require social media platforms to establish processes by which they remove such content from their platforms.
Lame-Duck Period: Potential AI Activity
Whether there is enough momentum to get AI legislation across the finish line during the lame-duck Congress is an open question. As we've noted, lame-duck periods are complicated, particularly when control of the Senate will shift in the next Congress. The final weeks of Democratic control may incentivize Democratic Senators to act on AI, but AI legislation will compete with other legislative priorities. Furthermore, while all of the AI bills discussed during the Subcommittee hearing have bipartisan support, Republicans may prefer to wait until the next Congress, when they will hold the majority in both chambers, to address AI. While the prospects for passing these bills remain uncertain, the three weeks remaining on the Senate's calendar this Congress will bring clarity.
We will continue to monitor, analyze, and issue reports on AI legislation developments in the lame-duck Congress and the 119th Congress.