14 May 2025

Key AI Developments To Watch This Year

Perkins Coie LLP


Perkins Coie is a premier international law firm with over a century of experience, dedicated to addressing the legal and business challenges of tomorrow. Renowned for its deep industry knowledge and client-centric approach, the firm has consistently partnered with trailblazing organizations, from aviation pioneers to artificial intelligence innovators. With 21 offices across the United States, Asia, and Europe, and a global network of partner firms, Perkins Coie provides seamless support to clients wherever they operate.

The firm's vision is to be the trusted advisor to the world’s most innovative companies, delivering strategic, high-value solutions critical to their success. Guided by a one-firm culture, Perkins Coie emphasizes excellence, collaboration, inclusion, innovation, and creativity. The firm is committed to building diverse teams, promoting equal access to justice, and upholding the rule of law, reflecting its core values and enduring dedication to clients, communities, and colleagues.


As we move further into 2025, the artificial intelligence (AI) landscape continues to evolve at a rapid pace; indeed, nearly every week seems to bring news of another major AI breakthrough.

In this post, we highlight the emerging AI-related business and legal developments that we will be closely monitoring over the course of this year and explore issues and opportunities raised by these developments.

The Mind-Boggling Surge in AI-Generated Content

AI-generated content has seen an exponential increase since OpenAI released ChatGPT, powered by GPT-3.5, in November 2022. One study found that approximately 57% of all text on the web has been generated or translated by AI. Nina Schick, a technology pundit, predicts that, by the end of this year, 90% of online content will be AI-generated. The business and legal ramifications of this trend are remarkable:

  • What will be the impact on Internet usage when so much online content is artificially generated and often of low quality? In a sea of so-called "AI slop," how will consumers find authentic, human-authored works? And will human creators be able to compete with an infinite wave of works generated by AI at little or no cost?
  • As content-generating AI tools become an alternative to online search engines, will consumers even need to spend as much time online? An AI tool can generate, in an instant, information and content that one previously had to track down online; what impact could this have on the many publishers who depend on Internet traffic for ad revenue?
  • Because the raw output of generative AI tools is unprotected by copyright in the United States and many other countries, more and more of the content that we encounter online will be copyright-free and available for anyone to use, modify and exploit, at least from a copyright law perspective. What will the impact be of this trend? Contract law will become the key means to protect such content – but note recent decisions in the Second Circuit and the Northern District of California finding state breach of contract claims preempted under the U.S. Copyright Act.
  • Section 230 of the Communications Decency Act contains the "26 words that created the Internet" – that is, the broad safe harbor immunizing interactive computer service providers and users from claims seeking to treat them as the publisher or speaker of information provided by "another information content provider." As websites and online platforms increasingly rely on AI-generated content to attract and engage users, a looming legal question is whether Section 230 provides any protection from claims arising from defamatory and other false information generated by AI tools.

Potential Decline in Quality of AI Output

As AI-generated content becomes ubiquitous, the quality of such content is a growing concern. A University of Oxford study found that when a generative AI tool is trained solely on generative AI content, the quality of the tool's output degrades significantly. This phenomenon, termed "model collapse," occurs after just a few cycles of AI learning from generative AI outputs. The study concludes that sustainable AI development requires a continuous influx of human-generated content. Without this, the risk of amplifying disinformation and errors increases, potentially corrupting previously unbiased training sets. But if, as noted above, human-authored content online is increasingly dwarfed by generative AI content, how will this affect the value of generative AI tools?

  • Query whether databases containing only information and content created prior to the rise of generative AI might see a dramatic increase in value due to the model collapse problem.

Coming Plateau in AI Development?

For all of the stunning advancements in AI over the past two years, there are some warning signs that the pace of AI breakthroughs may be slowing down and perhaps even reaching a plateau. For example, OpenAI's latest model, Orion, reportedly shows only marginal improvements over its predecessors. One challenge for the industry is the scarcity of high-quality training data; as developers exhaust publicly available text sources, they are increasingly turning to AI-generated training data, which, as discussed above, can negatively impact quality. This has led some industry experts to suggest that new models are not achieving the same gains from scaling as previous iterations.

The Debate Over AGI

Whether and when Artificial General Intelligence (AGI) – that is, AI that matches or exceeds human capabilities across a broad range of cognitive tasks – will be achieved remains a contentious topic.

Although some AI experts contend that advanced large language models (LLMs) have already achieved AGI due to their ability to discuss a wide range of topics and perform diverse tasks, most feel that AGI has yet to arrive. For those researchers who believe AGI is possible, the estimated timeline ranges from as soon as two to five years to as distant as the end of this century. Other AI experts, such as leading scientist Yann LeCun, question whether LLMs will ever achieve AGI, arguing that "[a] system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe."

Adoption of AI Tools Across Industries

There is a sense that, for all of the promise of AI, many businesses have been slow to embrace it. But that is changing, and corporate AI use is likely to be ubiquitous by the end of 2025. Indeed, we are starting to see this shift already; according to a Bipartisan Policy Center report, the information sector has hit an 18.1% adoption rate while professional, scientific and technical services are at 12%. A McKinsey survey found that 72% of organizations have adopted AI in at least one business function, a notable increase from prior years. With AI becoming increasingly human-like in the coming months, expect adoption rates across industries to skyrocket, perhaps with a corresponding growth in the unemployment rate.

Regulatory Landscape Under the Trump Administration

The regulatory environment for AI in the United States is being overhauled under the Trump administration. Just three days after his inauguration, President Trump revoked the Biden administration's Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Less than three weeks later at the AI Action Summit, Vice President Vance reaffirmed the administration's hands-off approach to AI regulation, warning global leaders that excessive regulation could cripple the AI industry. Andrew Ferguson, the new head of the Federal Trade Commission, has pledged to scale back the agency's AI enforcement activities, while the appointment of Silicon Valley's David Sacks as the White House's inaugural "AI Czar" suggests a more pro-business approach to the AI industry going forward.

But, as Sir Isaac Newton's third law of motion provides, "for every action, there is an equal and opposite reaction," and we are already seeing states – especially blue states – pick up the regulatory mantle. For example, last September, California passed 18 new laws regulating AI, 15 of which took effect at the start of this year. The Colorado AI Act, passed last May and set to become effective on February 1, 2026, requires developers and deployers of "high-risk" AI systems to use "reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination" of such systems. New York legislators have introduced a bill in the Assembly and Senate that would require generative AI providers to "conspicuously display" warnings that inform users that outputs of the generative AI system may be "inaccurate and/or inappropriate."

Next-Generation AI Legal Issues

The release of generative AI tools sparked a wave of lawsuits raising the issue of whether the unauthorized use of third-party content for machine learning purposes is a fair use; these cases are working their way through the court system and will yield substantive rulings on the merits in the coming months. But, as AI technology continues to improve and becomes more widespread, keep an eye on these emerging legal issues as well:

  • How to allocate liability for false information and other problematic outputs produced by AI-fueled chatbots? To what extent might an AI tool developer, or a company making available the AI tool to its customers, or the customers themselves, be liable for such outputs? There is already a Canadian court decision holding Air Canada liable where its online chatbot had negligently misrepresented the company's bereavement fare policy; in this country, however, liability issues may be complicated by the extent to which Section 230 of the Communications Decency Act may provide protection for the output of generative AI tools.
  • AI-fueled autonomous agents are starting to be made available to companies and consumers; such agents can shop for groceries, book travel accommodations and make restaurant reservations for their users. As these agents become more common, will they be "click accepting" or otherwise consenting to online contractual terms in the course of performing their tasks and, if so, to what extent will users be bound by such terms?
  • As generative AI becomes integral to the creative process, U.S. courts will increasingly face the difficult task of weeding out unprotectable AI-generated material from protectable human-authored content in disputed works. This challenge is compounded by the fact that the U.S. Copyright Office, while requiring applicants to disclaim AI-generated components of to-be-registered works, does not mandate that applicants identify those components with specificity in the registration application. As a result, in future copyright infringement suits, it may prove especially difficult to assess both ownership and the scope of protectable expression, leading to more fact-intensive inquiries (with a greater likelihood of jury trials) and higher litigation costs.
    • One emerging best practice in anticipation of this coming issue is for content creators to seek to contemporaneously document their human contributions to the raw output of AI tools.

Concluding Thoughts

As we look ahead, 2025 is poised to be a pivotal year in the ongoing integration – and disruption – of AI across industries and legal frameworks. The technological leaps of the past two years have brought generative AI tools from the margins into the mainstream, accelerating business adoption, regulatory activity and legal uncertainty. Although there will undoubtedly be many critical issues raised by AI becoming commonplace, the legal, business, societal and technical challenges discussed above will be among those shaping AI development and its impact on businesses and consumers.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
