ARTICLE
27 October 2025

Dechert Cyber Bits Issue 84 - October 23, 2025

Dechert is a global law firm that advises asset managers, financial institutions and corporations on issues critical to managing their business and their capital – from high-stakes litigation to complex transactions and regulatory matters. We answer questions that seem unsolvable, develop deal structures that are new to the market and protect clients' rights in extreme situations. Our nearly 1,000 lawyers across 19 offices globally focus on the financial services, private equity, private credit, real estate, life sciences and technology sectors.

Tractor Supply Agrees to Pay $1.35 Million Fine—The CPPA's Largest to Date

On September 26, 2025, the California Privacy Protection Agency ("CPPA") adopted a sweeping Stipulated Final Order ("Order") with Tractor Supply Company ("Tractor Supply") for violations of the California Consumer Privacy Act ("CCPA"). The CPPA first began its investigation into Tractor Supply in 2024 after it received a complaint from a California consumer, and the public disclosure of this investigation was previously discussed here.

In the Order, the CPPA determined, among other things, that Tractor Supply: (i) failed to maintain an adequate privacy policy informing consumers of their rights under California law; (ii) failed to provide notification to job applicants of their privacy rights under California law; (iii) failed to provide a sufficient mechanism for consumers to opt-out of having personal information sold or shared; and (iv) provided personal information to third parties (such as service providers and contractors) without adequate contractual privacy protections.

The Order requires, among other things, that Tractor Supply: (i) pay a $1.35 million fine; (ii) modify its procedures for consumers to submit opt-out requests regarding the selling and sharing of their data, including by conducting quarterly scans (among other ways); and (iii) for a period of four years, provide a certificate of compliance to the CPPA, maintain a monitoring program, and conduct annual reviews of its website and mobile applications. Michael Macko, the CPPA's head of enforcement, explained that the CPPA "made it an enforcement priority to investigate whether businesses are properly implementing privacy rights, and this action underscores [the CPPA's] ongoing commitment to doing that for consumers and job applicants alike."

Takeaway: The Order is unique in two main respects. First, the $1.35 million fine is the CPPA's largest fine to date. Second, Tractor Supply agreed to the CPPA's findings as a condition of the settlement, meaning Tractor Supply admitted to privacy violations rather than structuring the settlement as resolving allegations. The Tractor Supply settlement underscores the CPPA's increasingly aggressive enforcement of California privacy laws, targeting both consumer and employee data. Businesses should consider taking steps to confirm that their opt-out mechanisms, including browser-based signals like Global Privacy Control (GPC), are functional and seamless across all technologies, including third-party tracking tools used for advertising. Privacy notices for job applicants are also under scrutiny, as California law does not exempt this data category. The CPPA is also closely reviewing contracts with service providers and third parties for compliance with CCPA requirements. It would be prudent for companies to regularly update privacy policies, review data practices, verify that contracts with service providers, third parties, and contractors meet CCPA standards, and prepare for heightened enforcement efforts, as demonstrated by this record-setting fine and the CPPA's recently announced joint investigative sweep with Colorado and Connecticut.
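For teams verifying that their opt-out handling honors browser signals, the GPC specification provides that a participating browser sends the HTTP request header `Sec-GPC: 1`. The sketch below is a minimal, illustrative server-side check, not compliance tooling; the function name and the dict-of-headers interface are assumptions for the example:

```python
def gpc_opt_out(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control signal.

    Per the GPC specification, a user agent expresses the opt-out
    preference by sending the header `Sec-GPC: 1`. Any other value,
    or the absence of the header, means no signal was expressed.
    """
    return headers.get("Sec-GPC", "").strip() == "1"


# Example: a request with the signal set should be treated as an
# opt-out of the sale or sharing of personal information.
print(gpc_opt_out({"Sec-GPC": "1"}))  # True
print(gpc_opt_out({}))                # False
```

In practice, this check would feed into whatever mechanism suppresses third-party advertising and analytics tags for the opted-out user; regulators have focused on whether the signal actually propagates to those downstream tools.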

EU AI Act: Incident Reporting Template and Guidance

The European Commission has published draft guidance and a reporting template in relation to the requirement under the EU AI Act to report "serious incidents" and "widespread infringements of high-risk AI systems" (together, "Reportable Incidents") to regulators. A consultation on the draft guidance is open until November 7, 2025.

Under Article 73 of the EU AI Act, which will be applicable from August 2, 2026, providers of high-risk AI systems will be required to notify Reportable Incidents to their relevant regulator. Deployers of high-risk AI systems also have reporting obligations to the provider. The guidance is designed to help AI providers and deployers understand what constitutes a Reportable Incident and how the reporting obligations should be interpreted. The reporting template helps to break down these requirements and shows the information required in a structured format.

A large part of the guidance focuses on the AI Act's definitions of "serious incident" and "widespread infringement," and on the harms resulting therefrom. The guidance reviews each definition and the associated harms in detail. The AI Act envisages the provider of a high-risk AI system taking primary responsibility for reporting to regulators. However, deployers of the system must do so if "the deployer is not able to reach the provider". On this issue, the guidance is potentially controversial, requiring the deployer to take on the provider's reporting obligations if the provider does not answer within 24 hours.

Finally, the guidance attempts to tackle the interplay with reporting obligations under other laws, such as NIS2 and DORA. In these cases, incidents should be reported in accordance with those laws, with additional reporting requirements under the AI Act only for violations of fundamental rights.

Takeaway: Generally, the guidance demonstrates a pragmatic, business-friendly approach. Nonetheless, the practical examples and use cases that will be collected as part of the consultation will provide important context to the Commission. We expect there will be significant input from stakeholders in the consultation process, particularly with respect to the requirement for deployers to step into the provider's shoes where the provider cannot be reached, and hope for some movement from the Commission on this requirement in the final guidance.

Cash App to Pay $375,000 After Data Incident

On October 1, 2025, the U.S. Financial Industry Regulatory Authority ("FINRA") and Cash App Investing LLC ("Cash App") entered into a settlement agreement (the "Agreement") to resolve alleged violations of Regulation S-P and the FINRA Rules. The alleged violations relate to a self-reported data security incident that occurred when a former Cash App employee downloaded the personal information of millions of Cash App's customers two months after ending his employment. The stolen personal information included names, account numbers, and (for many users) investing account values and holdings; it did not include social security numbers, addresses, birthdates, or usernames and passwords. Notably, the employee involved was the one who had developed the database at issue. Under the Agreement, Cash App "accepts" FINRA's findings without admitting or denying them.

According to the Agreement, Cash App allegedly failed to: (i) disable the ex-employee's access credentials, because Cash App's supervisory system implementing its cybersecurity policies and procedures overlooked the at-issue database; and (ii) monitor the at-issue database. The Agreement also states that Cash App took three months to uncover the unauthorized access to the database. Upon discovery of the incident, Cash App took prompt remedial action by, among other things, locking out the unauthorized user, implementing its cybersecurity protocols and procedures, and notifying the relevant regulators. As part of the Agreement, Cash App agreed to pay a $375,000 fine and accept an agency censure.

Takeaway: The resulting enforcement action highlights the unforgiving way in which regulators treat companies that are trying to do the right thing (here, a self-discovered, self-disclosed, self-remediated breach nonetheless led to a large fine for the company). This action by FINRA underscores the critical importance of proactively managing ex-employees' access to internal systems and data, particularly after termination, regardless of seniority or role. For financial services firms and businesses broadly, this case highlights the need for robust cybersecurity policies that include comprehensive access controls, regular audits of all databases, and swift detection mechanisms for unauthorized activity. Unfortunately, the result also highlights that delayed action and overlooked systems can lead to significant enforcement risks, even in the face of thorough remedial action and proper notification.

Sendit App Operator and its CEO Face Lawsuit Over Alleged COPPA Rule Violations and Deceptive Practices

On September 29, 2025, the U.S. Department of Justice ("DOJ") upon referral from the Federal Trade Commission ("FTC") filed a complaint in the United States District Court for the Central District of California against the operator of the "sendit" app, Hearts Holding, Inc. ("Hearts Holding"), and its CEO (collectively, the "Defendants"). Sendit is an app that integrates with Snapchat and is intended for teenagers and children. Among other features, it provides a way for users to interact with friends and anonymously post questions and answers on social media platforms.

The complaint alleges that the Defendants violated the Children's Online Privacy Protection Rule ("COPPA Rule"), the FTC Act, and the Restore Online Shoppers' Confidence Act ("ROSCA"). With respect to COPPA, the complaint alleges that sendit is both directed towards children and that Defendants have actual knowledge they are collecting personal information from children. The alleged violations of COPPA include, among other things: (1) failing to provide notice to parents about the Defendants' collection and use of children's data; (2) failing to obtain parental consent; and (3) failing to limit the collection of children's personal information for which they lacked verifiable parental consent. With regard to its alleged violations of ROSCA, the FTC accused sendit of misleading behavior to encourage customers to purchase subscriptions, including the sending of fake "catfishing" messages to "lure" subscribers in. The complaint seeks injunctive relief, monetary relief, and civil penalties.

Takeaway: This case suggests that the FTC's heightened interest in children's privacy that began during the Biden administration remains an enforcement priority, especially when it comes to businesses targeted at young audiences. It also shows the government has not suspended its practice of pressuring business executives in negotiations with the company on these types of actions, given that the CEO was named in the complaint individually. This raises the stakes for all businesses with respect to the FTC's enforcement power. Businesses should consider treating children's privacy as a high compliance risk and candidly assess whether they could be accused of collecting data from children or of targeting their online services to children and, if so, ensure COPPA Rule compliance in an ever-evolving technology landscape. This includes taking reasonable steps to see that children's data is not collected without parental consent and providing sufficient and direct notice to parents about children's data collection pursuant to the COPPA Rule.

UK Tribunal Confirms Broad Reach of UK GDPR in Clearview AI Case

In a recent lengthy appeal decision, the UK Upper Tribunal reviewed the territorial and material scope of the UK GDPR.

In 2022, the UK Information Commissioner's Office ("ICO") issued Clearview AI with an enforcement action, including a fine of around £7.5 million and an order to delete certain data, for breaches of the UK GDPR in relation to its facial recognition data. At first instance, Clearview successfully challenged the action on the basis that its activities were outside the scope of the UK GDPR because its services were provided exclusively for the purposes of foreign state national security and criminal law enforcement. For an overview of the first instance decision and further background, see our OnPoint. The Upper Tribunal has now overturned the First-tier Tribunal's decision.

The Upper Tribunal held that there is no general exemption from the UK GDPR for businesses providing services to foreign states even if those services are limited to the fields of national security and criminal law enforcement. Clearview was providing services independently on a commercial basis, and not as an agent of the foreign states to whom it provided its services. Clearview could not, therefore, rely on international principles of foreign state immunity to exempt it from the UK GDPR.

In relation to the territorial scope of the UK GDPR, Clearview argued that its customers (e.g. foreign intelligence agencies) carried out the behavioral monitoring and this did not bring Clearview's own activities within scope of the UK GDPR. The Upper Tribunal rejected this and gave the territorial scope provisions of the UK GDPR an expansive meaning, holding that:

1. The UK GDPR applies not only to controllers who themselves conduct behavioral monitoring, but also to controllers whose data processing is related to behavioral monitoring carried out by another controller.

2. In any event, Clearview's own activities amounted to behavioral monitoring because this encompasses "passive" collection, sorting, classification and storing of data by automated means with a view to potential subsequent use (including by another controller) of personal data processing techniques which consist of profiling a natural person.

Takeaway: The Upper Tribunal's expansive interpretation of the territorial scope of the UK GDPR and the scope of behavioral monitoring confirms the broad reach of the UK GDPR in the view of the courts. Non-UK established companies that engage in any monitoring-type activities, or that provide services that enable their customers to conduct monitoring, will want to conduct a careful analysis of their activities to determine the extent to which those activities may fall directly within the scope of the UK GDPR.

Dechert Tidbits

Imgur Blocks Access to UK Users After Fine Warning from UK Data Regulator

Imgur, a popular image-hosting platform widely used on Reddit and forums, has blocked access for UK users. The UK Information Commissioner's Office ("ICO") had recently issued MediaLab AI, Imgur's parent, a notice of intent to fine following an investigation into age checks and the handling of children's personal data.

European Commission launches AI Act Desk Service and Single Information Platform

The European Commission has launched the AI Act Service Desk and Single Information Platform, described as a "central hub", to help organizations implement the EU AI Act, offering guidance, interactive tools, and a direct channel for queries.

G7 Cyber Expert Group Statement on AI Cybersecurity Risks for the Financial Sector Published

The G7 Cyber Expert Group recently issued a statement warning that rapidly evolving AI—especially generative and agentic systems—both strengthens cyber defenses and amplifies threats and AI-specific vulnerabilities, reshaping risk in finance. While it does not constitute guidance or regulatory expectation, the statement urges executives and authorities in the financial sector to treat AI as a safety and resilience priority, tighten governance and secure-by-design practices, assure data integrity and monitoring, harden identity and incident response, and build AI literacy.

California Passes First-of-its-Kind Law Requiring Browser Ad Tracking Opt-Outs

On October 8, 2025, California Governor Gavin Newsom signed the California Opt Me Out Act—the first law of its kind in the United States—which mandates that web browsers provide users with opt-out preference signals to communicate their privacy preferences to websites. If enabled, opt-out preference signals will communicate to websites that a user does not want their personal information sold, shared, or used for behavioral advertising. The law will take effect in January 2027.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
