7 February 2025

Akin Intelligence - November/December 2024

Akin Gump Strauss Hauer & Feld LLP
Akin is a law firm focused on providing extraordinary client service, a rewarding environment for our diverse workforce and exceptional legal representation irrespective of ability to pay. The deep transactional, litigation, regulatory and policy experience we bring to client engagements helps us craft innovative, effective solutions and strategies.
Welcome to the November-December edition of Akin Intelligence. As the end of 2024 approached, the European Union (EU) sustained progress on guidance and supporting materials for its AI Act, and international AI Safety Institutes solidified their commitment to cooperation. Enforcement actions continued elsewhere, including the Federal Trade Commission's (FTC) efforts to combat deceptive and misleading AI companies.

To ensure continued receipt, please subscribe to future issues here if you have not already done so. For past issues and other AI content, check out Akin's AI & ML Insights and AI Law & Regulation Tracker.

United States Technology

BIS Issues New Framework for AI Diffusion

On January 13, 2025, the Bureau of Industry and Security (BIS) published a Framework for Artificial Intelligence Diffusion that revises the Export Administration Regulations' (EAR) controls on advanced computing integrated circuits (ICs) and adds a new control on artificial intelligence (AI) model weights for certain advanced closed-weight AI models. The framework seeks to control the spread of advanced AI technology in a manner that promotes its potential economic and social benefits while protecting U.S. national security and foreign policy interests. The rule requires compliance beginning 120 days after publication in the Federal Register, which is expected on January 15, setting a May 15, 2025 compliance date.

Click here to read the full summary.
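As a quick check on that timeline, the short date calculation below (a minimal sketch, assuming the anticipated January 15, 2025 Federal Register publication date) confirms the stated compliance deadline:

```python
from datetime import date, timedelta

# Anticipated Federal Register publication date, per the rule.
publication = date(2025, 1, 15)

# Compliance is required starting 120 days after publication.
compliance_date = publication + timedelta(days=120)

print(compliance_date)  # 2025-05-15
```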

Commerce and State Launch International Network of AISIs

On November 20, 2024, the Departments of Commerce and State co-hosted the inaugural meeting of the International Network of AI Safety Institutes (AISIs). The United States will serve as the inaugural chair, and the other initial members include Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore and the United Kingdom. The convening is structured as a technical working meeting to address three topics: managing risks from synthetic content, testing foundation models and conducting risk assessments for advanced AI systems.

BIS Announces New Semiconductor Manufacturing Equipment Rule

On December 2, 2024, the U.S. Department of Commerce, Bureau of Industry and Security (BIS) announced an interim final rule that significantly revises controls on advanced computing and semiconductor manufacturing items (the SME Rule). The SME Rule is highly complex and intended to inhibit China's ability to develop an indigenous semiconductor ecosystem, including capabilities to manufacture advanced semiconductors, and to slow the PRC's development of advanced AI. The SME Rule clarifies, inter alia, export restrictions applicable to software license keys. It states that software license keys, which allow users to use software or hardware that is "locked" and unusable without a license key, are classified and controlled under the same Export Control Classification Number as the corresponding software to which they provide access—or in the case of hardware, the corresponding software group. Another key aspect of the SME Rule is its addition of new controls and a corresponding license exception for high bandwidth memory (HBM), which is found in most advanced semiconductors that power advanced AI models. These controls will impact both HBM stacks and semiconductors that contain HBM stacks. BIS notes that "[a]ll HBM stacks currently in production" exceed the memory bandwidth density threshold specified in the new rule.

USAISI Establishes Task Force to Test AI Models

On November 20, 2024, the U.S. Artificial Intelligence Safety Institute (USAISI) at the Department of Commerce's National Institute of Standards and Technology (NIST) announced the formation of the Testing Risks of AI for National Security (TRAINS) Taskforce. The Taskforce will coordinate research and testing of advanced AI models across critical national security and public safety domains. The Taskforce comprises officials from the Departments of Defense, Energy, Homeland Security, and Health and Human Services.

NIST Releases Materials for ARIA Workshop

On November 7, 2024, NIST released information in preparation for a workshop kicking off its "Assessing Risks and Impacts of AI" (ARIA) program. The information included exercises allowing participants to help shape upcoming pilot projects for testing the reliability, safety, security and privacy of AI systems before they are deployed. The workshop was held on November 12, 2024.

DHS Releases AI Deployment Framework

On November 14, 2024, the U.S. Department of Homeland Security (DHS) and its Artificial Intelligence Safety and Security Board (AISSB) published their Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure. The framework focuses on safeguarding America's critical infrastructure, identifying AI-related risks and providing guidance to manage those risks as AI systems are developed, accessed and integrated within larger systems. In summary, the framework:

  • Identifies roles for AI safety in critical infrastructure: cloud and compute infrastructure providers, AI developers, critical infrastructure owners and operators, civil society and the public sector.
  • Proposes voluntary responsibilities for those roles: responsible design, data governance, safe & secure deployment and ongoing monitoring.
  • Offers technical and process improvements for AI safety and trustworthiness across 16 critical infrastructure sectors, including Communications, Energy and Public Health.

This framework seeks to further AI safety and security in critical infrastructure by harmonizing safety and security practices, improving the delivery of critical services, enhancing trust and transparency, protecting civil rights and liberties, and advancing AI safety and security research.

CDAO Announces Edge Data Integration Services Award

On December 3, 2024, the Department of Defense's Chief Digital and Artificial Intelligence Office (CDAO) announced an award for a production agreement for edge data integration services, a data mesh solution that improves interoperability among legacy systems. The award was announced under CDAO's Open Data and Applications Government-owned Interoperable Repositories, or Open DAGIR, multi-vendor ecosystem. The edge data integration services will provide an open, government-owned data infrastructure that supports third-party data and applications via open application programming interfaces (APIs).

Senators Ask GAO About AI Export Controls

On December 3, 2024, Senators Peter Welch and Ron Wyden sent a letter to the Government Accountability Office (GAO) focusing on the effectiveness of U.S. export controls on AI technologies, with an emphasis on human rights risks. The letter requests that the GAO assess these controls, particularly their ability to prevent misuse by adversaries and foreign governments, such as the use of AI for surveillance and potential human rights abuses. The letter poses eight questions, covering topics including the current scope of export controls, whether controls ensure adherence to international humanitarian law (IHL) and international human rights obligations, and legislative gaps that need to be addressed. The Senators stress the importance of balancing U.S. leadership in AI innovation with the mitigation of associated risks.

Congressional Action

Sen. Welch Introduces Bill to Increase Transparency of AI Training Data

On November 21, 2024, Sen. Peter Welch (D-VT) introduced the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act (S. 5379), which seeks to protect copyright owners by allowing them to request information on training data from generative AI developers and deployers to determine whether their copyrighted works were used in a model. The bill would create a new subpoena mechanism under the copyright statute and establish a rebuttable presumption that works were copied if a developer fails to comply with the subpoena. Several organizations in the creative community have voiced support for the bill, including the American Society of Composers, Authors and Publishers (ASCAP), the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and Universal Music Group. The introduction of this bill follows a recent lawsuit brought by several major record labels alleging copyright infringement by AI music generator model owners Suno and Udio.

Health Care

FDA Digital Health Advisory Committee Addresses Generative AI

On November 20 & 21, 2024, the U.S. Food and Drug Administration (FDA) held its inaugural Digital Health Advisory Committee meeting, which focused on whether and how generative AI should be regulated by the FDA. FDA and manufacturers discussed how generative AI-enabled medical devices pose unique challenges, including the adequacy of data sets used to test and evaluate AI once in the real world and the difficulty in providing FDA with a clear understanding of foundational models for devices that rely on generative AI. The agency has not yet cleared a medical device that uses generative AI but has authorized 950 AI/machine learning (ML)-enabled medical devices as of August 2024. The FDA noted that the agency may require special controls unique to generative AI-enabled devices, including requirements for post-market monitoring.

AdvaMed Releases AI White Paper

On November 11, 2024, AdvaMed, the world's largest medical technology association representing device, diagnostics and digital technology companies, released a white paper that reviews the current landscape of AI-based applications and products in the health care sector, and identifies steps to accelerate the use of AI in medical technologies. The trade association highlights two main types of tasks that AI is uniquely well suited to tackle: (1) identifying and analyzing patterns in patient charts that practitioners might miss; and (2) automating repetitive routine tasks. AI is being incorporated into a range of technologies in the health care sector, and AdvaMed's white paper focuses primarily on AI/ML-enabled medical devices, which are regulated by FDA. AdvaMed anticipates that the FDA will likely need to issue additional guidance to keep pace with development of AI models, including for adaptive models and approaches to mitigating bias. AdvaMed endorses FDA's use of "Predetermined Change Control Plans" (PCCPs), which permit manufacturers to outline approaches to future modifications as part of an initial submission, and states that PCCPs should evolve to allow for greater post-market modifications for adaptive algorithms. The trade association also calls for domestic and international harmonization of requirements, including development of common AI standards to advance safe, secure and trustworthy use of AI.

NIH Develops AI Algorithm to Identify Potential Volunteers for Clinical Trials

On November 18, 2024, researchers from the National Institutes of Health (NIH) published a study on an algorithm that harnesses AI to help accelerate the process of matching potential volunteers for relevant clinical research trials. The algorithm, called TrialGPT, is intended to help clinicians navigate the vast range of clinical trials available to patients by identifying potential matches and providing a summary of how that person meets the criteria for study enrollment. The team of researchers used a large language model (LLM) to develop an innovative framework for TrialGPT and compared the algorithm to the results of three human clinicians who analyzed over 1,000 patient-criterion pairs. The team also conducted a pilot user study, where two human clinicians reviewed six anonymous patient summaries and matched them to potentially suitable clinical trials. When clinicians used TrialGPT as a starting point, they spent 40% less time screening patients and maintained the same level of accuracy. The research team was selected for the Director's Challenge Innovation Award, which will allow the team to further assess the model's performance and fairness in real-world clinical settings. The researchers "anticipate that this work could make clinical trial recruitment more effective and help reduce barriers to participation for populations underrepresented in clinical research."
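NIH's description suggests a criterion-by-criterion matching loop in which an LLM assesses each eligibility criterion and explains its reasoning. The sketch below is a hypothetical illustration of that pattern only; the llm_assess placeholder and the data shapes are assumptions, not NIH's actual implementation:

```python
# Hypothetical sketch of LLM-based, criterion-level trial matching,
# loosely modeled on the published description of TrialGPT.

def llm_assess(patient_summary: str, criterion: str) -> tuple[bool, str]:
    # Placeholder standing in for an LLM prompt; this naive keyword-overlap
    # check exists only to keep the sketch self-contained and runnable.
    overlap = set(patient_summary.lower().split()) & set(criterion.lower().split())
    return len(overlap) >= 2, f"shared terms: {sorted(overlap)}"

def match_patient_to_trial(patient_summary: str, criteria: list[str]) -> dict:
    """Score one patient against every eligibility criterion of one trial."""
    results = []
    for criterion in criteria:
        meets, why = llm_assess(patient_summary, criterion)
        results.append({"criterion": criterion, "meets": meets, "why": why})
    met = sum(r["meets"] for r in results)
    return {
        "eligibility_score": met / len(criteria) if criteria else 0.0,
        "criterion_results": results,  # per-criterion detail for clinician review
    }
```

Surfacing the per-criterion explanations, rather than a bare score, mirrors the study's workflow, in which clinicians used TrialGPT's output as a starting point for their own review.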

FDA Finalizes Guidance for AI-Enabled Medical Devices

On December 4, 2024, the Food and Drug Administration (FDA) issued final guidance to provide recommendations for predetermined change control plans (PCCPs) tailored to AI-enabled device software functions, applicable to De Novo, premarket approval (PMA) and premarket notification (510(k)) pathways. FDA recognizes that development of AI-enabled devices is an iterative process, and PCCPs are intended to allow developers to plan for modifications while continuing to provide a reasonable assurance of safety and effectiveness. FDA's guidance provides that a PCCP should include planned modifications; a methodology to develop, validate and implement those modifications; and an assessment of the impact of those modifications. This guidance builds upon concepts for AI and machine learning that FDA initially introduced in 2019.

CMS Proposes Rules to Ensure AI Does Not Impede Equitable Access to Services

On December 10, 2024, the Centers for Medicare & Medicaid Services (CMS) proposed to require Medicare Advantage (MA) organizations that use AI to ensure that such use is equitable. Specifically, the regulation would require MA organizations to "ensure services are provided equitably irrespective of delivery method or origin, whether from human or automated systems." The proposed regulation explicitly states that if AI or automated systems are used, they "must be used in a manner that preserves equitable access to MA services." CMS noted that compliance with the proposal could include limiting the impact of biased data inputs in AI, implementing a process to regularly ensure AI use is nondiscriminatory and not using outputs with a known discriminatory bias. MA organizations are already prohibited from discriminating against enrollees based on their health status, and CMS does not expect that this proposed regulation would impose new burdens on MA organizations.
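The proposal does not prescribe a monitoring method, but a routine comparison of automated decision rates across enrollee groups is one simple building block for the kind of nondiscrimination process CMS describes. The sketch below is a hypothetical illustration; the record format and the four-fifths flagging threshold are assumptions, not CMS requirements:

```python
from collections import defaultdict

def approval_rate_disparity(decisions: list[dict]) -> dict:
    """Compare automated approval rates across enrollee groups.

    Each record is assumed to look like {"group": "A", "approved": True};
    neither the field names nor the grouping scheme come from the proposal.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for decision in decisions:
        counts[decision["group"]][0] += decision["approved"]
        counts[decision["group"]][1] += 1
    rates = {group: approved / total for group, (approved, total) in counts.items()}
    top = max(rates.values())
    # Flag any group approved at less than 80% of the highest group's rate
    # (the "four-fifths" heuristic; the threshold is an assumption, not a rule).
    return {
        group: {"rate": rate, "ratio_to_top": rate / top, "flag": rate / top < 0.8}
        for group, rate in rates.items()
    }
```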

To view the full article click here

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
