Welcome to the inaugural edition of "Akin Intelligence," a newsletter featuring artificial intelligence (AI) updates on a wide range of fronts: state and federal regulatory developments, pending legislation and other policy updates, key litigation, moves from AI developers and advocates, international developments and more.

Akin's cross-practice AI team provides a full spectrum of expertise on this world-changing, fast-moving technology. With businesses and governments around the world pivoting toward AI, the one certainty is that fascinating and revolutionary change is on the way. Akin Intelligence will help you stay on top of the latest news and developments so that you can keep track of how the AI revolution is affecting your industry, government and future.

You can subscribe to future issues of Akin Intelligence here.

This edition of Akin Intelligence covers AI developments from January to April 2023, as well as a few important items from late 2022. Going forward, Akin will publish Akin Intelligence on a monthly basis. You are receiving this initial edition of Akin Intelligence because you've subscribed to other Akin updates on related topics. We hope you find Akin Intelligence to be a useful resource, and we welcome any feedback.

In this issue

  • U.S. Federal Action
  • Congressional Action
  • State Action
  • Judicial Action
  • Industry Action
  • Updates from China
  • U.K. Updates
  • EU AI Act Update 
  • Akin Thought Leadership

U.S. Federal Action

International Agreements on AI

The United States has launched two new international partnerships on AI:

  • On January 27, 2023, the United States and the European Union (EU) signed an administrative agreement to enhance the use of AI to improve agriculture, health care, emergency response, climate forecasting and the electric grid. The initiative is intended to give officials greater access to detailed and data-rich AI models, enabling more efficient emergency response and electric grid management.
  • On January 31, 2023, the United States and India announced the launch of the U.S.-India initiative on Critical and Emerging Technology (iCET). The partnership will expand collaboration on AI and develop common standards and benchmarks for trustworthy AI.

Mandating Equitable Federal AI

On February 16, 2023, President Joe Biden signed an Executive Order (E.O.) on racial equity which includes provisions on equitable data and algorithmic discrimination in AI. The order mandates that agencies design, develop, acquire and use AI in the federal government in a manner that advances equity, and that agencies consider consulting their civil rights offices on decisions regarding the design, development, acquisition and use of AI.

Request for Comments on AI Accountability Measures

On April 11, 2023, the Department of Commerce's National Telecommunications and Information Administration (NTIA) released a request for comment on AI accountability measures, including whether potentially risky new AI models should go through a certification process before they are released. The request "focuses on self-regulatory, regulatory, and other measures and policies designed to provide reliable evidence to external stakeholders" or provide assurance "that AI systems are legal, effective, ethical, safe, and otherwise trustworthy." Written comments are due on or before June 12, 2023. The action follows legislative directives to "support the advancement of trustworthy AI" from the National AI Initiative Act of 2020 and the CHIPS and Science Act of 2022.

Administration Considers AI Risks and Opportunities

The Federal Trade Commission (FTC), U.S. Department of Justice (DOJ) and the U.S. Equal Employment Opportunity Commission (EEOC) on April 25, 2023, released a joint statement outlining a commitment to further enforcement efforts against discrimination and bias in AI systems. Earlier in the month, FTC Chair Lina Khan stated that the agency is aiming to ensure the AI field is not dominated by existing big tech companies. Chair Khan noted that, as AI and machine learning necessitate huge amounts of data and storage, there is a potential for "big companies to become bigger." The FTC has also cautioned companies to ensure their AI products' capabilities are not "over[sold] or overstate[d]." Khan's statement complements the FTC's existing investigation into competition and data security in the cloud computing industry and previous FTC publications regarding company claims about AI capabilities.

On April 4, 2023, President Biden met with the President's Council of Advisors on Science and Technology (PCAST) about the "risks and opportunities" that rapid advancements in AI pose for individual users and U.S. national security. President Biden emphasized the opportunities for AI assistance in addressing disease and climate change, but also noted the importance of addressing potential risks to society, national security and the economy. President Biden highlighted the role of the private sector in introducing reliable and safe AI, noting that "tech companies have a responsibility, in my view, to make sure their products are safe before making them public."

New AI Standards and Guidance

  • On January 26, 2023, the National Institute of Standards and Technology (NIST) released version 1.0 of its AI Risk Management Framework (RMF). The framework is voluntary, but it provides a roadmap for how companies can get ahead of potential AI regulatory and governance issues while maximizing AI's benefits. It lays out key guiding principles and characteristics of "trustworthy AI," and suggests seeking broad and diverse input to help identify and combat potential bias.
  • In October 2022, the White House Office of Science and Technology Policy issued a Blueprint for an AI Bill of Rights that asserts principles and guidance around equitable access and use of AI systems. The Blueprint identified non-binding core principles to guide and govern the design, use and deployment of AI systems, with a focus on preventing potential human rights abuses. It does not contain prohibitions against AI deployments or provide any enforcement mechanisms. The Blueprint also provides the private sector with an opportunity for self-regulation, particularly in relation to protecting consumer rights. Akin has previously covered the AI Bill of Rights here.
  • On January 24, 2023, the National Artificial Intelligence Research Resource (NAIRR) Task Force released its final report. The report provides a detailed roadmap for producing a national research infrastructure that would broaden access to essential resources for AI research and development. The plan aims to provide AI researchers with access to computational resources, high-quality data, training tools and user support, and it calls for analysis-ready data sets to be defined utilizing existing, community-driven principles and standards. It also aims to designate a single federal agency to serve as the administrative home for NAIRR operations and set the standard for responsible AI research through the design and implementation of its governance processes. The Task Force estimated for Congress that the NAIRR could reach initial operating capability within 21 months and would require approximately $2.6 billion in appropriations.
  • On March 3, 2023, the Departments of the Treasury and Commerce provided reports to Congress describing plans under consideration to regulate outbound investment in sensitive technologies, including AI, as required by the 2023 Consolidated Appropriations Act. Administration officials previously stated that a "handful" of "extremely sensitive" sectors should be covered by an outbound investment regime, including a subset of artificial intelligence applications. The reports identify ongoing efforts and the resources necessary to ensure clear definitions and scoping of such a regime. The goal of any proposed outbound investment regime is to prevent exploitation of U.S. capital in ways that threaten national security without unduly burdening U.S. investors.
  • During the Second Summit for Democracy on March 29, 2023, the White House released a fact sheet on "Advancing Technology for Democracy." The fact sheet urged democracies to align the development of AI with respect for democratic principles, human rights and fundamental freedoms. The summit also launched the Trustworthy and Responsible AI Resource Center for Risk Management, which is designed to enable responsible use of AI.
  • On March 30, 2023, the Food and Drug Administration (FDA) issued draft guidance on Marketing Submission Recommendations for a Predetermined Change Control Plan for AI/ML-enabled Device Software Functions. The draft guidance interprets a new law authorizing Predetermined Change Control Plans (PCCPs) for medical devices. PCCPs authorized by FDA will allow medical device manufacturers to make changes to an approved or cleared device, consistent with the PCCP, which would otherwise require submission of a new application to FDA. This draft guidance provides recommendations to include in PCCPs for medical devices that feature machine learning-enabled device software functions (ML-DSFs).

Funding for AI Research

On March 9, 2023, the Biden administration released the National Cybersecurity Strategy, which aims to reinvigorate federal research and development for cybersecurity by driving investment in the security of AI, among other sectors. The strategy emphasized that research, development and demonstration (RD&D) investments in these technologies will "prove decisive for U.S. leadership in the coming decade" and that RD&D efforts "will facilitate the proactive identification of potential vulnerabilities" and mitigation research.

The same day, President Biden released his budget request for FY2024, which aims to allocate nearly $25 billion in discretionary funding to research related to the CHIPS and Science Act, including AI. Much of the AI research will be led by the National Science Foundation (NSF), which will oversee broad research and development efforts. The administration is also eager to incorporate AI systems into digital identity software to improve government services.

Congressional Action

Senate Democratic Leadership Legislative Framework

On April 13, 2023, Senate Majority Leader Chuck Schumer (D-NY) announced his work with stakeholders on a new legislative framework to regulate AI, combined with bolstered oversight efforts. The effort, expected to span multiple congressional committees, is centered on four guardrails: "Who," "Where," "How" and "Protect." The first three guardrails aim to "inform users, give the government the data needed to properly regulate AI technology, and reduce potential harm," while the final guardrail "will focus on aligning these systems with American values and ensuring that AI developers deliver on their promise to create a better world." Staff from the Majority Leader's office have compared the push to last Congress's efforts to pass the CHIPS and Science Act (P.L. 117-167). Leader Schumer expects to refine the AI framework through conversations in the coming weeks with industry, government officials, academics and advocacy groups.

National Security Legislation

Reps. Jay Obernolte (R-CA) and Jimmy Panetta (D-CA) reintroduced the AI for National Security Act (H.R. 1718), which clarifies and codifies the U.S. Department of Defense's (DoD) authority to procure AI-based endpoint security tools in order to improve the cyber defenses of DoD systems.

New Legislation, Brought to You by AI

In January, Rep. Ted Lieu (D-CA) introduced the first-ever piece of federal legislation written by AI. Using ChatGPT, Rep. Lieu offered the following prompt: "You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI." The resulting resolution—H. Res. 66—is also the first AI-focused measure introduced this Congress. Rep. Lieu, who has been appointed to the House Science, Space and Technology Committee, has stressed that he will use his position on the panel to ensure AI is regulated in a "responsible and ethical manner."

New Committee Hearing on Competitiveness with China

At the beginning of February, the House Energy and Commerce (E&C) Committee held a hearing focused on technology competition with China. During the hearing, Innovation Subcommittee Chair Gus Bilirakis (R-FL) specifically voiced concern about China's investments in AI and other emerging technologies, outlining the need for action in Congress to address the issue and enact legislation to facilitate innovation.

Federal Funding for AI Institute

Senate Majority Leader Chuck Schumer (D-NY), Sen. Kirsten Gillibrand (D-NY) and Rep. Brian Higgins (D-NY) recently announced that the University at Buffalo will receive $20 million to establish a National Artificial Intelligence Research Institute for transforming education for children with speech and language processing challenges, marking the first federal AI institute awarded this year. The funding is part of the $500 million provided in the omnibus for implementation of the Regional Technology and Innovation Hub Program created in the CHIPS and Science Act of 2022 (P.L. 117-167).

Congressional Letters

Near the end of January, House Science, Space and Technology Committee Chair Frank Lucas (R-OK) and House Oversight and Accountability Chair James Comer (R-KY) sent a letter to the White House Office of Science and Technology Policy (OSTP) raising concerns about conflicting administration guidance on AI. The letter referenced a December 2022 call between the pair's staff and OSTP, requesting written responses to a number of questions, including whether OSTP coordinated with NIST or the National Artificial Intelligence Advisory Committee (NAIAC) in developing its AI blueprint.

On April 19, 2023, Sens. John Hickenlooper (D-CO) and Marsha Blackburn (R-TN), Chair and Ranking Member of the Senate Commerce Committee's Subcommittee on Consumer Protection, sent a letter to technology associations asking how they will implement the National Institute of Standards and Technology's (NIST) AI Risk Management Framework (RMF), including with regard to deploying safe and transparent AI systems.

Sen. Michael Bennet (D-CO) in March sent a letter to the CEOs of OpenAI, Snap, Alphabet, Microsoft, and Meta raising concerns about the potential harm to children in the push to integrate generative AI in the companies' products and services. The letter poses a number of questions, including with regard to the companies' planned safety features for younger users engaging with "AI-powered chatbots."

Congressional Committees Convene Oversight Panels

On March 8, 2023, the House Oversight Committee held a hearing to examine advances in AI. During the hearing, witnesses called for regulatory guardrails and public education on rapid AI advancements.

Over the last month, the Senate Armed Services Committee's (SASC) Cybersecurity Subcommittee convened two hearings featuring AI discussion, including a hearing to examine the state of AI and machine learning applications to improve U.S. DoD operations. During the hearing, Chair Joe Manchin (D-WV) and Sen. Mike Rounds (R-SD) outlined the need to examine legislative solutions to ensure cybersecurity protections in AI platforms and to set guidelines for how DoD uses AI. To inform future legislation, the lawmakers asked witnesses testifying—including those from Palantir, Shift5, and RAND Corporation—to share related recommendations "as quickly as possible — 30 to 60 days."

U.S. Senator Introduces Bill Targeting AI's Shortfalls

On April 28, 2023, Sen. Michael Bennet (D-CO) introduced the Assuring Safe, Secure, Ethical, and Stable Systems for AI (ASSESS AI) Act (S. 1356), which would establish a cabinet-level AI Task Force charged with examining gaps in the federal government's AI policies and uses and providing related policy recommendations. The Task Force's membership would include the U.S. Attorney General; the directors of the National Institute of Standards and Technology (NIST) and the White House's Office of Science and Technology Policy (OSTP); and representatives from industry, academia, and nonprofits.

State Action

California

A number of AI-focused bills were advanced by the Senate Judiciary Committee on April 19, 2023, including:

  • Assembly Bill 331: This bill would add regulations relating to AI, imposing obligations on employers to evaluate the impact of automated decision making and provide notice, and would expressly prohibit employers from using automated decision making in ways that contribute to algorithmic discrimination.
  • Senate Bill 313: This bill would establish a new Office of Artificial Intelligence within the California Department of Technology to guide the design, use and deployment of automated systems and ensure that AI systems are deployed in a manner that is consistent with state and federal laws and regulations regarding privacy and civil liberties.
  • Senate Bill 721: This bill would create a California Interagency AI Working Group.

Further, the California Privacy Protection Agency (CPPA) has invited and received preliminary comments on the following topics, which will be addressed in its future rulemaking: cybersecurity audits, risk assessments and automated decision making.

Colorado

The Colorado Division of Insurance (DOI) released its draft "Algorithm and Predictive Model Governance Regulation" in February, which would require state-licensed life insurance companies to detail their inventory of AI models, create governance principles for those systems and publish transparency reports disclosing how the models have been tested to limit bias. The comment period for the rules, which are poised to be the first in the nation to govern insurers' use of AI, closed on March 7, 2023, and a final draft is expected in the coming days.

Connecticut

In Connecticut, Senate Bill 1103 has advanced through committee. The bill would establish an Office of Artificial Intelligence to oversee the use of AI systems by the state and create an AI Bill of Rights outlining ethical practices for the development and deployment of the systems.

New York

On April 6, 2023, the New York City Department of Consumer and Worker Protection (DCWP) announced that enforcement of the impending law regulating the use of automation and AI in employment decisions was postponed for a second time. The final rule will go into effect on May 6, 2023, and DCWP will begin enforcement on July 5, 2023. The law prohibits employers and employment agencies from using an automated employment decision tool, unless the tool has been subject to a bias audit within one year of use.

Texas

On April 19, 2023, House Bill 2060 was approved by the House, sending the bill to the Senate. The legislation would create an AI Advisory Council—composed of seven members—to study and monitor AI systems developed, employed or procured by state agencies.

Judicial Action

USPTO & AI Inventorship

On April 24, 2023, the U.S. Supreme Court denied review of the U.S. Court of Appeals for the Federal Circuit's decision in Thaler v. Vidal, which held that an "inventor" under the United States Patent Act must be a human being, and that an AI system could not be named as an inventor on a patent application. Computer scientist and AI researcher Stephen Thaler petitioned the Supreme Court in March for review of the Federal Circuit's decision, arguing that the Patent Act simply defines an inventor as one who invents, and therefore patent protection should extend to inventions made by an AI system. The Supreme Court's denial of certiorari confirms that the Federal Circuit's decision requiring inventors to be human remains the law in the United States.

Microsoft, GitHub, OpenAI & Licensed Code

In January, Microsoft, GitHub and OpenAI were sued in federal court in relation to Copilot. According to the complaint, Copilot is "an AI-based product that promises to assist software coders by providing or filling in blocks of code using AI." The complaint, filed as a class action on behalf of authors of code available on GitHub, also alleges that the training data for Copilot "includes data in vast numbers of publicly accessible repositories on GitHub," and alleges violations of the Digital Millennium Copyright Act (DMCA), the Lanham Act and the California Consumer Privacy Act (CCPA), along with unfair competition and breach of GitHub's terms of service regarding licenses and privacy. The complaint alleges that Copilot violates GitHub's requirement to display copyright information when making use of original code because programs using Copilot may generate code that is identical to licensed code used to train it. Microsoft and OpenAI have moved to dismiss the complaint.

Midjourney, Stability AI, DeviantArt & Copyrighted Images

A class action complaint against Midjourney Inc., Stability AI and DeviantArt alleges direct and vicarious copyright infringement, violations of the DMCA and rights of publicity, unfair competition and breach of DeviantArt's terms of service. According to the complaint, the training of AI on publicly available but copyrighted data and images impermissibly exceeds the fair use allowed under copyright law.

In addition to the above class action complaint, Getty Images, Inc. separately filed suit in Delaware against Stability AI for infringement of Getty Images' intellectual property rights. According to the complaint, Stability AI's unlicensed usage of over 12 million copyrighted images violates the Copyright Act, the Lanham Act, and Delaware's trademark and unfair competition laws. The complaint also alleges that images produced by Stability AI reproduce Getty Images' watermark, damaging Getty Images' trademark, and produce highly derivative content that itself might violate copyright.

Google & Section 230 Scope

On February 21, 2023, the Supreme Court heard oral argument in Gonzalez v. Google LLC, a case concerning the application of section 230 of the Communications Decency Act. Section 230 generally shields technology platforms from legal liability stemming from third-party content published on their sites. Plaintiffs argued that YouTube's recommendation algorithm falls outside the scope of section 230 because the algorithm provides targeted recommendations to users, an action that exceeds the conventional editorial functions (such as simply displaying or hosting information) envisioned by the Act.

Industry Action

Event: Rep. Auchincloss on the Role of AI

On February 16, 2023, the Hill held a virtual discussion focusing on AI in the workforce. The event featured remarks from Rep. Jake Auchincloss (D-MA), who pointed to a number of ethical concerns with regard to AI, including its ability to amplify and exacerbate disinformation. He voiced concern that participation in the industry thus far has been dominated by "Big Tech," outlining the need for universities, civil society, journalists, startups and government officials to have access to the same algorithmic development and the same scale and quantity of data as Big Tech companies. Rep. Auchincloss also expressed support for funding a version of a "public cloud," as outlined by the NAIRR Task Force in its final report.

Policy Priorities/Letters

In February, the U.S. Chamber of Commerce submitted a series of Freedom of Information Act (FOIA) requests to six federal agencies regarding OSTP's Blueprint for an AI Bill of Rights. The letters seek additional information on the extent of stakeholder involvement in the development of the Blueprint from the Consumer Financial Protection Bureau (CFPB), Department of Education, Equal Employment Opportunity Commission (EEOC), FTC, Department of Health and Human Services (HHS) and National Telecommunications and Information Administration (NTIA).

In January, a number of tech trade groups unveiled their 2023 policy agendas, including BSA, whose 2023 policy agenda underscores the need for the U.S. to require impact assessments for high-risk uses of AI.

The Information Technology Industry Council's (ITI) annual agenda calls on the Biden administration and Congress to accelerate the implementation of the NIST AI Risk Management Framework as the "preeminent tool for managing AI risks and increasing the trustworthiness and adoption of AI." It also outlines the need for lawmakers to fund AI research and development, maximize the government's use of AI and avoid prescriptive mandates that inhibit AI innovation.

Coalition Unveils Health Care Blueprint

The Coalition for Health AI, whose membership includes Google and Microsoft, recently published a blueprint for a trustworthy and ethical implementation of AI systems. In particular, the blueprint calls for continuous monitoring of AI systems, as well as for software developers to take steps to mitigate bias and protect privacy.

Companies Unveil New AI Tools

On April 13, 2023, Amazon Web Services announced new tools for building with generative AI. The company's new service, Amazon Bedrock, makes foundation models from AI21 Labs, Anthropic, Stability AI and Amazon accessible via an API.

On March 21, 2023, Google released its AI chatbot, Bard, to allow the public to collaborate with generative AI. The company stressed that its work on Bard is guided by its AI Principles, with a continued focus on quality and safety. 

Microsoft also recently announced the launch of Microsoft Security Copilot, which aims to serve as a tool to quickly detect and respond to cyber threats. The company notes that the tool will provide access to the "most advanced OpenAI models to support demanding security tasks and applications."

Stanford Institute for Human-Centered Artificial Intelligence Releases 2023 AI Index Report

The Stanford Institute for Human-Centered Artificial Intelligence recently released its 2023 AI Index Report, which tracks trends in AI research, deployment and policy. High-level takeaways include:

  • Data on U.S. job postings indicates that "employers in the United States are increasingly looking for workers with AI-related skills."
  • "Organizations that have adopted AI report realizing meaningful cost decreases and revenue increases."
  • An analysis of legislative records in 127 countries indicates that "the number of bills containing 'artificial intelligence' that were passed into law grew from just 1 in 2016 to 37 in 2022." 
  • The number of incidents and controversies involving the "ethical misuse" of AI increased 26-fold between 2012 and 2022.

Updates from China

AI in the Judiciary

On December 9, 2022, the Supreme People's Court issued the "Opinions on Regulating and Strengthening the Applications of Artificial Intelligence in the Judicial Fields." The document indicates that the Chinese judiciary will improve its processes for incorporating AI by 2025 and will build AI applications into the judicial process by 2030.

Narrower Export Restrictions

On December 30, 2022, the Ministry of Commerce published a draft amendment to the Catalogs of Technologies Prohibited and Restricted from Export by China, seeking public comments by January 28, 2023. The draft amendment revises the description of "restricted artificial intelligence interactive interface technology" from "artificial intelligence interactive interface technology (including speech recognition technology, microphone array technology, voice wake-up technology, interactive understanding technology, etc.)" to "artificial intelligence interactive interface technology for Chinese and minority languages." Industry in China views this amendment as narrowing the export restrictions on AI interactive interface technology, although the draft may change before the final version is published.

Robotics & AI

On January 18, 2023, the Ministry of Industry and Information Technology, the Ministry of Finance and 15 other departments jointly issued the Implementation Plan for "Robot +" Application Actions. The plan identifies government efforts to promote the application of various robot-related technologies, including AI. It indicates that the government will promote the use of AI technologies in the health care, elder care and education industries, including in medical research and medical diagnosis.

Lower Listing Threshold for High-Tech Startups

On February 17, 2023, the Shenzhen Stock Exchange issued the Notice on Matters Concerning the Listing of Unprofitable Enterprises on the ChiNext, allowing unprofitable start-ups in certain high-tech industries to list on the A-share market under a lower threshold. ChiNext is a NASDAQ-style subsidiary of the Shenzhen Stock Exchange. The new listing rule will cover enterprises in the AI, advanced manufacturing, internet, big data, cloud computing and biomedicine industries. The new listing threshold is an "estimated market value . . . not less than 5 billion RMB, and . . . operating income in the latest year [of] not less than 300 million yuan." The new rule is intended to make listing on ChiNext more attractive to high-tech enterprises.

U.K. Updates

On April 24, 2023, the United Kingdom announced that it would devote £100 million to a Foundation Model Taskforce charged with developing "safe and reliable" foundation AI models meant to rival technologies like ChatGPT. The U.K. government has stated that the models will "build the UK's 'sovereign' national capabilities" in AI and be used "across the economy," ensuring the U.K.'s place as a "science and technology superpower by 2030." In particular, the announcement cites areas like health care and education as sectors that could be "transform[ed]" by the Taskforce's AI, predicting that wide adoption of AI could "raise global GDP by 7 percent over a decade, making its adoption a vital opportunity to grow the UK economy." The Taskforce is expected to launch pilots targeting public services within the next six months.

Following the U.K.'s recently published White Paper outlining proposals for a regulatory framework on AI, on April 11, 2023, the U.K.'s Information Commissioner's Office (ICO) released its response to the Paper. In particular, the response urged the government to work through bodies such as the Digital Regulation Cooperation Forum (DRCF) in producing joint guidance among the U.K.'s various regulators; to ensure that the White Paper's AI principles "are interpreted in a way that is compatible with the [U.K.'s] data protection principles"; and to design its "joint regulatory sandbox" so that it can effectively "respond to the needs of digital innovators."

EU AI Act Update

The EU continues to iterate on its "AI Act," which, when implemented, will constitute the first comprehensive AI regulation by any major economy. On March 7, 2023, EU negotiators agreed on a definition of AI that closely follows the definition adopted by the OECD. Under the deal, AI will be defined as "a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments." A week later, on March 14, 2023, leading EU lawmakers proposed a set of significant obligations for providers of general purpose AI tools. Under the proposal, these providers would have to follow rigorous data governance measures, comply with EU risk management requirements in their design and testing of AI systems, undergo external audits and more.

More recently, EU Parliament negotiators have reportedly agreed to amend the AI Act to require that developers of AI tools disclose any copyrighted material used in building their systems.

Akin Thought Leadership

Federal AI Developments: Leader Schumer Unveils AI Legislative Framework, Reintroduction of AI for National Security Act and FTC Interest (April 25, 2023)

UK Government Proposes New AI Regulatory Regime (April 14, 2023)

FDA Gets Digital, Agency Issues Digital Health Policies on PCCP, Cybersecurity and Drug Development (April 4, 2023)

New AI Guidance: NIST Reveals First Version of AI Risk Management Framework (February 22, 2023)

TechSpeak Series: Artificial Intelligence and Machine Learning: Regulating AI and the Coming Legal Issues (January 20, 2022)

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.