Congratulations to Brenda Sharton!
Brenda Sharton, global chair of Dechert's Cyber, Privacy and AI practice group, recently was named a 2025 Law360 MVP for Cybersecurity & Privacy for the third time. Law360 awards this recognition annually to only a handful of lawyers who have "distinguished themselves from their peers by securing hard-earned success in high-stakes litigation, complex global matters or record-breaking deals" in the past year.
California, Connecticut, and New York Attorneys General Obtain a $5.1 Million Settlement with Education Technology Company Illuminate Following Data Breach
The California, Connecticut, and New York attorneys general announced three separate settlements, totaling $5.1 million, with Illuminate Education, Inc. ("Illuminate") following a 2021 data breach at the education technology vendor affecting approximately 3 million California students, 28,610 Connecticut students, and 1.7 million New York students.
Illuminate contracts with school districts and schools to track students' attendance, grades, and behavioral and mental health information. In December 2021, however, a hacker used the credentials of a former employee, who had left the company three years earlier, to access Illuminate's network, create new credentials enabling future access, and reach both active and backup databases containing sensitive personal and medical information, including student names, race, whether a student received special accommodations, and coded medical conditions.
Following an investigation, the California DOJ determined that Illuminate failed to implement basic safety precautions, including terminating login credentials of former employees, monitoring and alerting for suspicious activity, and securing backup databases separately from active databases. The investigation also concluded that Illuminate made false and misleading statements in its privacy policy by claiming its safety measures met or exceeded state and federal legal requirements. Similarly, the New York and Connecticut Attorneys General determined that Illuminate failed to implement basic security measures to protect student data, including monitoring for suspicious activity.
According to California's proposed final judgment implementing the settlement terms, Illuminate must notify the California DOJ of breaches involving student data, implement real-time monitoring and alerts for suspicious activity, disable and regularly audit former and active credentials, and store backup databases in separate network segments. New York's settlement requires the same measures and additionally mandates encryption of student data. Connecticut's settlement, for its part, requires Illuminate to review and conform all contracts with Connecticut school districts to comply with the state's Student Data Privacy Law, establish a right to delete data, monitor vendors, and obtain an information security assessment from a third-party assessor.
Takeaway: The collaboration among state attorneys general in this matter reflects a growing trend of coordinated enforcement at the state level, raising the stakes for businesses handling sensitive data. Organizations should consider evaluating their data security practices, including access controls, monitoring, encryption, and breach response protocols, to mitigate risk and align with evolving regulatory priorities. Notably, Connecticut's settlement marks the first enforcement action under its 2016 Student Data Privacy Law, and the choice of this case suggests the Connecticut Attorney General intends to send a strong message about compliance with that statute, which covers not only data security safeguards but also mandated contract terms between school districts and third-party vendors, posting and parent-notice requirements, limits on use and disclosure, deletion and retention obligations, and breach-notification duties.
Revised Guidance on Generative AI for EU Institutions
The European Data Protection Supervisor ("EDPS"), the EU authority responsible for supervising data processing by EU institutions, has published new guidance on data protection issues arising from the use of generative AI, revising and expanding on its prior guidance on the topic.
The guidance addresses important issues of EU data protection law in the context of generative AI by answering a series of questions, including:
- How should roles and responsibilities be determined in generative AI systems?
- How can an institution determine whether its use of a generative AI system involves personal data processing?
- When should a data protection impact assessment be carried out?
- How can the principle of data minimization be guaranteed when using generative AI systems?
- How can fair processing be ensured, and bias avoided, when using generative AI systems?
The guidance provides a framework for EU institutions to assess their data protection obligations, flagging key areas of risk and offering examples of how to approach the analysis of data protection issues.
Takeaway: Although EU institutions are subject to a separate data protection regulation, those rules are largely equivalent to the GDPR. The EDPS's guidance considers various general principles and obligations of EU data protection law that apply similarly to private businesses. It is therefore a useful resource for private businesses considering their approach to generative AI or evaluating the implementation of a specific AI system.
Global Privacy Regulators Conduct Enforcement Sweep to Assess Websites' and Apps' Handling of Minors' Personal Data
During the week of November 3, 2025, a coalition of more than 30 data protection authorities in the Global Privacy Enforcement Network ("GPEN"), including the FTC, the California Attorney General, and the California Privacy Protection Agency, conducted a coordinated weeklong enforcement "sweep" to assess how websites and apps used by children handled minors' personal data.
During the sweep, authorities reviewed platforms commonly used by children, including social networks, online retailers, games, and educational sites, checking whether child-directed or child-used websites and applications were transparent about their data collection practices, offered age estimation or verification mechanisms, and used privacy-protective controls to limit data collection. Regulators also sought to recreate the experience from a child's perspective, testing how easily users could locate privacy information, delete an account, or make choices about their data. The sweep's coordinators, the Office of the Privacy Commissioner of Canada, the UK Information Commissioner's Office, and Guernsey's Data Protection Authority, stated that the results will be published in the coming months.
Takeaway: The initiative reflects growing global concern that, while digital spaces offer benefits to children, they also pose risks such as tracking, profiling, targeting, and exposure to harmful content. For companies, the sweep signals heightened scrutiny and potential follow-up education or enforcement, making it critical to have robust age assurance, data minimization, parental consent where required, privacy-protective default settings, limits on profiling, and clear, child-appropriate disclosures.
Texas Attorney General Ken Paxton Announces Historic $1.375 Billion Settlement with Google Regarding Privacy Violations
On October 31, 2025, Texas Attorney General Ken Paxton announced a $1.375 billion settlement with Google resolving two privacy lawsuits filed by the Texas Attorney General's Office in 2022. The lawsuits centered on Google's handling of geolocation information, Incognito mode searches, and biometric data. In the first case, Texas alleged violations of the Texas Deceptive Trade Practices Act, asserting that Google misrepresented and concealed material facts about how it tracked, used, and monetized users' location information. In an amended petition, Texas further alleged that Google continued collecting data from users who enabled Incognito mode, contrary to Google's privacy policy representations that Incognito mode allowed users to browse privately. The second case alleged that Google unlawfully captured Texans' biometric identifiers, such as voiceprints and facial geometry, through products including Google Photos, Google Assistant, and Nest Hub Max, in violation of Texas's Capture or Use of Biometric Identifier Act ("CUBI").
Texas described this as the largest state settlement with Google for data-privacy violations, far exceeding prior resolutions, including the $391 million settlement secured by a coalition of 40 states. In contrast, Google spokesperson José Castañeda characterized the settlement as resolving an array of "old claims," said the company is "pleased to put them behind [it]," and noted that Google will "continue to build robust privacy controls" into its services. The company also stated that the settlement does not mandate any product changes.
Takeaway: The $1.375 billion settlement underscores the growing risk of state-level enforcement actions targeting privacy practices, particularly those involving geolocation tracking and biometric data. Texas's success signals a trend of heightened scrutiny and aggressive penalties, which could encourage other states to pursue similar claims. Companies should proactively assess their data collection, use, and disclosure practices to confirm compliance with applicable state and federal laws governing geolocation and biometric data. Clear, accurate privacy policies and robust controls are essential to mitigating legal exposure and reputational harm in this evolving regulatory landscape.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.