31 March 2026

Dechert Cyber Bits – Issue 93 – March 26, 2026

Dechert

Contributor

Dechert is a global law firm that advises asset managers, financial institutions and corporations on issues critical to managing their business and their capital – from high-stakes litigation to complex transactions and regulatory matters. We answer questions that seem unsolvable, develop deal structures that are new to the market and protect clients' rights in extreme situations. Our nearly 1,000 lawyers across 19 offices globally focus on the financial services, private equity, private credit, real estate, life sciences and technology sectors.
Oregon Legislature Passes AI Companion Bill Including a Private Right of Action

This month, the Oregon state legislature passed SB 1546, a bill to regulate artificial intelligence ("AI") companion chatbots. Governor Tina Kotek has 30 weekdays (i.e., until April 17) to sign, veto, or allow the bill to become law without her signature. The bill targets AI systems designed to simulate ongoing relationships concerning personal or emotional topics. The legislation comes in response to growing concerns over the link between AI chatbots and suicidal ideation, especially in teens.

The bill requires operators who make AI companions available in Oregon to provide a clear and conspicuous notice that the user is interacting with AI-generated output, not a natural person, including periodic reminders. Operators must also establish policies and protocols for detecting suicidal or self-harm ideation and for preventing the AI companion from encouraging such ideation and, at a minimum, direct the AI companion to refer at-risk users to the national suicide and crisis lifeline, among other safeguards. The bill provides additional safeguards for minors, including requiring operators to take reasonable measures to prevent the AI companion from generating statements suggesting it is a real person, simulating romantic interest, or encouraging the user to keep using it. Operators must also ensure that the AI companion reminds minor users to take a break at least every three hours.

Importantly, the legislation creates a private right of action for individuals allegedly injured by an operator's violation to recover actual damages or statutory damages of $1,000 per violation (whichever is greater), as well as injunctive relief and attorneys' fees. While the legislation attempts to carve out certain categories of AI tools, such as customer service chatbots or standalone devices, the bill contains significant ambiguities bound to be litigated by the plaintiffs' bar. For example, key provisions turn on whether an AI tool asks questions concerning "emotional topics" and whether it has ongoing conversations "concerning matters that are personal to the user" without defining what topics are "emotional" or "personal."

Takeaway: If the Oregon bill is signed into law, any company offering AI chatbots—whether they are the developer of the AI tool in question or merely "deploying" a third party AI tool—will want to carefully assess whether the law applies to them, or could be read to apply to them, and consider taking steps to comply with its requirements. Moreover, the $1,000 per violation private right of action will likely generate a new wave of litigation against companies offering such tools in Oregon. Given the law's uncertain applicability and this litigation risk, companies should consider reviewing applicable Terms of Use for arbitration clauses and related terms.

CalPrivacy Enters Settlement with Ford Over Alleged “Unnecessary Friction” in Consumer Privacy Opt-Out Process

On March 5, 2026, the California Privacy Protection Agency (“CalPrivacy”) announced a settlement (“Settlement”) with Ford Motor Co. (“Ford”). The California Consumer Privacy Act (“CCPA”) gives consumers the right to opt out of businesses' sharing their personal information. CalPrivacy alleged that Ford violated the CCPA by requiring consumers to verify their identities before processing their opt-out requests and not processing requests submitted by consumers who failed to complete the email verification step. According to CalPrivacy, Ford's opt-out practices created “unnecessary friction” for consumers seeking to exercise their right to opt out of the sale and sharing of their personal information under CCPA.

As part of the Settlement, Ford agreed to pay a fine of $375,503, change its practices to provide consumers with an “easy” opt-out method that requires “minimal steps,” and audit the tracking technologies deployed on its website to maintain compliance with opt-out preference signals. Under the Settlement, Ford does not acknowledge wrongdoing.

The settlement follows an investigation by CalPrivacy's Enforcement Division, conducted with Ford's cooperation, into the company's privacy practices and its compliance with the CCPA. The investigation was part of a broader inquiry into vehicle manufacturers' privacy practices, similar to CalPrivacy's enforcement action against American Honda Motor Co. (see prior discussion in Cyber Bits Issue 73), and comes against the backdrop of CalPrivacy's ongoing “investigative sweep” in partnership with regulators in Colorado and Connecticut.

Takeaway: CalPrivacy's settlement with Ford shows that the agency is serious about compliance with the technical requirements of the CCPA and remains focused on opt-outs and the ease of exercising consumer rights. Companies should take a close look at all elements of their CCPA compliance programs and opt-out processes to make sure they meet the requirements to a “T,” as general efforts that are not carefully tailored to regulatory requirements and consumer ease may not be accepted.

European Commission Updates Draft Code of Practice on AI Transparency

The European Commission has released the second draft of its Code of Practice on transparent AI systems intended to help organizations comply with the wide-ranging transparency obligations under Article 50 of the EU AI Act. The updates follow feedback received on the first draft earlier this year (see Cyber Bits Issue 88 for our article on the first draft). These transparency obligations, which apply to any AI system that interacts with individuals, will take effect on August 2, 2026. The Commission believes that its second draft is more streamlined and designed to be easier for providers and deployers of AI to implement.

For providers of generative AI, the draft introduces a revised two‐layered approach to marking and detecting AI-generated or manipulated content, allowing flexibility to use optional techniques such as fingerprinting and logging. For deployers, simplified measures for labelling deepfakes and AI-generated or manipulated text are introduced, including design and placement requirements for a proposed EU common logo for AI-generated content.

The Commission is inviting stakeholders to submit feedback, including on the proposed EU icon, by March 30, 2026. Following this consultation, it aims to publish the final version of the Code by June 2026.

Takeaway: Organizations subject to the AI Act's transparency obligations will be looking ahead to the current August 2026 deadline for compliance with some trepidation, especially if a final Code does not arrive until June. While the draft Code may well be subject to further change, in the interests of timing, organizations subject to the transparency obligations will want to carefully review the revised draft Code and consider their approach to AI transparency obligations in light of the current draft.

European Data Regulators Issue Opinion on the European Biotech Act Proposal

The European Data Protection Board (“EDPB”) and the European Data Protection Supervisor (“EDPS”) have issued a joint opinion on the European Commission's draft European Biotech Act. The Commission issued its proposal on December 16, 2025; the Act aims to bolster Europe's biotechnology and biomanufacturing capabilities. The proposed Biotech Act would harmonize and clarify how the GDPR applies in the context of clinical trials, and would support the re-use of clinical trial data for further research.

Both bodies welcome the proposal to establish a single, clear legal basis for processing personal data for the purposes of clinical trials but emphasized that the processing should still be limited to what is necessary for that purpose.

The EDPB and EDPS support the proposal to provide clarity regarding the roles under the GDPR of different parties involved in clinical trials. However, they recommend specifying whether sponsors and investigators are independent controllers or joint controllers and reconsidering whether individual clinical investigators should be controllers or whether their host institution should be subject to that responsibility.

They further call for permissible secondary uses of trial data for further clinical trials or broader scientific research to be more clearly defined. To safeguard participant privacy, the opinion also recommends explicit requirements in the existing Clinical Trials Regulation for pseudonymization whenever direct identifiers are unnecessary.

Takeaway: While the EDPB and EDPS broadly support the proposed European Biotech Act and its objectives, their opinion highlights several data protection concerns that they recommend be addressed. The opinion is not legally binding, but it does hold significant sway and is likely to directly affect the legislative process. Organizations that are likely to be impacted by the proposed Act will want to keep an eye on the legislative process and input from key stakeholders such as the EDPB and EDPS. In any case, it is likely that data protection will form a key pillar of discussion in the legislative negotiations.

UK Competition Authority Publishes Guidance on Consumer-Facing AI Agents

The UK Competition and Markets Authority (“CMA”) has published guidance on complying with consumer law when using AI agents. According to the guidance, the UK now ranks as the world's third-largest AI market after the U.S. and China. The CMA highlights how AI has the potential to boost economic growth and improve everyday lives but reminds organizations to deploy “agentic” AI responsibly and in full compliance with consumer law. The guidance warns that organizations must remember that they are responsible for what an AI agent does in the same way as they are responsible for what an employee does.

Key practices highlighted by the CMA include:

  • transparent disclosure when customers interact with AI rather than a human, including considering labelling and accurate disclosure as to what the AI can and cannot do;
  • design and training so that AI agents respect statutory rights (e.g. cancellation or refund entitlements), avoid misleading statements, and secure any required consents;
  • ongoing human oversight to check that the AI agent is making correct decisions and generating expected results, including to identify “hallucinations,” incorrect outputs or non-compliant behaviour before and after deployment.

The guidance includes some helpful examples covering use of an AI agent to run a marketing campaign, to process refund requests, to respond to customer service queries, and to provide a service to customers.

Takeaway: The CMA's guidance is a helpful overview of some of the key points to be aware of when looking to implement AI agents interacting with customers, and a reminder that companies deploying AI agents may be held liable if the agent violates applicable law. Organizations using, or considering using, such AI agents will want to review the guidance and consider their own practices and procedures for their AI agents, and make amendments as needed.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

