ARTICLE
12 August 2025

AI Reporter - August 2025

Benesch Friedlander Coplan & Aronoff LLP

Contributor

Benesch, an Am Law 200 firm with over 450 attorneys, combines top-tier talent with an agile, modern approach to solving clients’ most complex challenges across diverse industries. As one of the fastest-growing law firms in the country, Benesch continues to earn national recognition for its legal prowess, commitment to client service and dedication to fostering an outstanding workplace culture.

AI Update

The United States Senate's 99-1 vote to lift a 10-year ban on state-level AI regulation marked a major shift in tech policy, challenging efforts by companies like OpenAI and Google to maintain uniform federal oversight. The bipartisan move, led by Senator Marsha Blackburn, a Republican from Tennessee, reflects growing concern that a moratorium would stall meaningful regulation amid Congress' inaction on tech issues. Meanwhile, the Trump administration's AI Action Plan pushes for rapid national development of AI infrastructure, signaling a preference for innovation over regulation. This acceleration raises concerns about ethical oversight and data security. Adding further complexity, the FCC is exploring its authority to override state AI laws under existing federal statutes, potentially reshaping the regulatory landscape. Together, these developments highlight a growing tension between innovation, governance, and accountability—underscoring the urgent need for a cohesive national strategy balancing technological progress with responsible oversight.

Abroad, the EU introduced a voluntary Code of Practice to help companies align with the AI Act, which takes effect on August 2. The Code focuses on transparency, copyright protection, and safety, offering a streamlined path to compliance for providers of general-purpose AI. Google, OpenAI, and Anthropic have pledged to honor the Code. Meanwhile, Google faces an EU antitrust complaint from independent publishers over its AI Overviews, which allegedly misuse publisher content and harm traffic and revenue. The complaint seeks interim measures to prevent further damage.

In the U.S. courts, a California federal judge sided with OpenAI in a trademark dispute, ruling that Open Artificial Intelligence infringed on OpenAI's trademark by using the name "Open AI," even though the defendant was founded earlier. The court found the name's use was likely to confuse consumers, especially given the timing of efforts to register the trademark. Elsewhere, voice actors Paul Lehrman and Linnea Sage won the right to pursue claims against AI startup Lovo for allegedly misusing their voices, highlighting growing concerns over identity rights in AI. Together, these cases illustrate emerging legal and ethical tensions as AI technologies intersect with branding, personal identity, and public trust—signaling a need for clearer standards and accountability in the rapidly evolving AI landscape.

These and other stories appear below.

AI in Business

Incogni report exposes privacy gaps in major AI platforms

The research indicates that AI platforms, including Google Gemini and Microsoft Copilot, collect sensitive information—such as names, email addresses, and location data—and often share it without sufficient transparency or user control. The study emphasizes the lack of clear mechanisms for opting out of data use in AI training and the challenges in removing data from machine-learning models, even under regulations like GDPR.

Source: Tech Informed

Grok's missteps raise red flags for enterprise AI adoption

Elon Musk's xAI is under scrutiny after its Grok chatbot displayed concerning behavior, including generating antisemitic content and responding to questions as if it were Musk himself. These incidents highlight ongoing issues of bias, safety, and transparency in AI systems, which are critical considerations for enterprise technology leaders when choosing AI models. xAI is preparing to launch its Grok 4 model, aiming to compete with leading AI systems from Anthropic and OpenAI.

Source: Venture Beat

KPMG, Hippocratic AI deploy generative agents to tackle workforce shortages

Part of Hippocratic AI's Polaris Constellation architecture, the generative agents are designed to assist with healthcare workflows, such as patient intake and care management follow-up calls. The collaboration intends to free up provider time and improve patient outcomes by using AI to interact with humans naturally. KPMG is conducting process analyses to identify high-pressure points and strategically deploy AI across the care continuum, enhancing healthcare efficiency while maintaining the human touch in clinical operations.

Source: MobiHealth News

SAG-AFTRA video game actors end strike with landmark AI protections deal

Hollywood video game voice and motion capture actors, represented by SAG-AFTRA, ended a nearly yearlong strike by signing a contract with video game studios that centers on AI protections. The agreement includes consent and disclosure requirements for AI digital replica use and allows performers to suspend consent for new material generation during a strike. The deal applies to studios like Activision Productions, Disney Character Voices, Electronic Arts Productions, and others, and was ratified by SAG-AFTRA members with an approval vote of over 95%.

Source: Reuters

The AI shift in finance

AI is transforming the financial sector by automating tasks such as underwriting, compliance, and asset allocation. This shift is leading to a cognitive displacement of middle-office work, where AI models now read earnings reports, classify regulatory filings, and propose investment strategies. The rise of GenAI and autonomous systems is reshaping the financial workforce, emphasizing practical experience and critical judgment over traditional credentials like MBAs and CFAs.

Source: Brookings

Lawyer who wrote on AI ethics fired for using fake ChatGPT case

A lawyer at Goldberg Segalla—who previously wrote about AI ethics in the legal profession—was dismissed after submitting a court filing containing a fake case citation generated by ChatGPT. The incident occurred in a Chicago Housing Authority case and highlights ongoing concerns about the reliability of AI-generated legal content and the ethical responsibilities of attorneys using such tools. The event may prompt further scrutiny and potential legislative action at the state level regarding AI use and data integrity in legal and regulated industries.

Source: Above the Law

AI Litigation & Regulation

LITIGATION

AI powers $14.6B health fraud takedown

The DOJ's 2025 National Health Care Fraud Takedown resulted in criminal charges against 324 defendants, uncovering schemes targeting Medicare, Medicaid, and other insurance programs with over $14.6 billion in intended fraud. The operation introduced the Health Care Fraud Data Fusion Center, which utilizes AI, cloud computing, and analytics to detect fraud patterns, and marks a shift towards treating healthcare fraud like cybercrime. The DOJ's crackdown highlights the increasing sophistication of healthcare scams, driven by data breaches and GenAI, within the fragmented healthcare ecosystem.

Source: PYMNTS

Google faces EU scrutiny for AI-generated search summaries

Google faces an EU antitrust complaint from independent publishers over its AI Overviews, which are AI-generated summaries displayed above traditional search results. The complaint alleges that Google abuses its market power by using publisher content for these summaries, causing significant harm to publishers in terms of traffic, readership, and revenue. The publishers requested an interim measure to prevent further harm.

Source: Reuters

OpenAI accuses rival AI group of hidden ties to Elon Musk

OpenAI filed a complaint with the California Fair Political Practices Commission against the Coalition for AI Nonprofit Integrity (CANI), alleging violations of state lobbying laws related to a bill that could affect OpenAI's business plans. The complaint suggests that CANI may have hidden ties to Elon Musk, who opposes OpenAI's restructuring from a nonprofit to a for-profit entity. CANI denies these allegations, emphasizing its exclusion of individuals from competitor organizations to avoid conflicts of interest.

Source: Politico

Missouri AG probes AI bias

Missouri Attorney General Andrew Bailey accused Google, Microsoft, and OpenAI of deceptive business practices related to their AI chatbots. Bailey claims the chatbots, including Gemini, Copilot, and ChatGPT, provided misleading answers when asked to rank the last five U.S. presidents with respect to antisemitism. He demands extensive documentation on how these chatbots handle input to produce responses, suggesting potential bias in their outputs.

Source: The Verge

AI weather startup alleges trade secret theft by former consultant

Atmo, an AI-driven weather simulation company, filed a lawsuit against former consultant Koki Mashita, accusing him of stealing proprietary code and confidential data to launch a rival firm, Aeolus Labs. Atmo claims Mashita accessed over 200 sensitive files, including source code and client information, during and after his consultancy in 2024. Mashita allegedly contacted Atmo clients and secured $12 million in funding for Aeolus using Atmo's methods, despite assurances that he would not use Atmo's trade secrets.

Source: Law 360 (sub. req.)

Judge orders OpenAI to disclose key documents in Musk lawsuit

A federal judge ordered OpenAI to release documents related to CEO Sam Altman's brief firing and the company's shift toward a for-profit model, citing their relevance to Elon Musk's fraud and charitable trust claims. The judge also approved Musk's request for information on potential conflicts of interest involving Altman and OpenAI President Greg Brockman. However, Musk's demand for extensive financial records was denied as excessive. OpenAI must also provide details on its corporate restructuring, while Musk must share documents related to an alleged implied contract.

Source: Law 360 (sub. req.)

Voice actors win partial victory in AI voice cloning lawsuit

A federal judge ruled two voice actors can proceed with certain state-level claims against AI startup Lovo for allegedly using their voices without permission. While their trademark and most copyright claims were dismissed, the judge allowed claims under New York Civil Rights law and a breach of contract claim to move forward. The actors allege they were misled into providing voice samples through Fiverr, which were later used in promotional content and AI tools. The court found their voices were not protectable trademarks but acknowledged the broader implications for identity rights and AI.

Source: Law 360 (sub. req.)

OpenAI, Microsoft push back against expanded author lawsuit

The companies argue the lawsuit improperly expands its scope by introducing new claims and models, such as GPT-4.5, and by asserting that ChatGPT's outputs are infringing derivative works. OpenAI further contends the plaintiffs failed to provide any specific examples of infringing outputs, making the claims legally insufficient. Microsoft also criticized the suit for violating a court order limiting the case to previously asserted claims. Both companies argue the expanded complaint would delay proceedings and complicate discovery.

Source: Law 360 (sub. req.)

Federal judge sides with OpenAI in trademark dispute over 'Open AI' name

A federal judge in California ruled in favor of OpenAI, finding that Open Artificial Intelligence infringed on OpenAI's trademarks by using the name "Open AI" in commerce. Although the defendant predated OpenAI's founding, the court agreed with OpenAI's argument that the defendant's use of the name and its attempt to register the "Open AI" trademark after OpenAI's launch were intended to create consumer confusion. The ruling allows OpenAI to protect its brand and prevent further use of the "Open AI" name by the defendant. Open Artificial Intelligence plans to appeal the decision.

Source: Reuters

LeBron James targets AI deepfakes

LeBron James' legal team issued a cease and desist letter to the creator of an AI platform for enabling users to generate viral AI videos of James and other NBA stars, including a widely circulated clip depicting a pregnant James. The incident highlights growing concerns in sports and entertainment over the unauthorized use of AI-generated content—particularly deepfakes—which can infringe on intellectual property rights and raise other privacy and data security issues.

Source: Engadget

REGULATION

Senate overturns AI regulation moratorium in bipartisan vote

The U.S. Senate voted 99-1 to remove a 10-year moratorium on state regulation of AI from a major tax and spending bill. The decision marks a setback for tech companies like OpenAI and Google, which had advocated for the moratorium to prevent a patchwork of state regulations that could hinder innovation. Critics argued the moratorium would effectively prevent any AI regulation, as Congress has not passed significant tech rules in decades. The amendment to remove the moratorium was a bipartisan effort to allow states to regulate AI independently.

Source: Time

ATPC urges Congress to preserve AI flexibility in fraud prevention amid regulatory debate

The American Transaction Processors Coalition (ATPC) urged Congress to avoid regulating AI in a way that would hinder payments companies' use of the technology to combat fraud. The appeal came as Congress debated a budget bill that initially included a provision to prevent states from regulating AI for 10 years, which was later removed. The ATPC—representing companies like American Express, Deluxe, and Fiserv—highlights the absence of federal AI-specific regulation and emphasizes the need for flexibility in using AI to counter AI-driven cyber-attacks and fraud.

Source: Payments Dive


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
