20 January 2026

Dechert Cyber Bits – Issue 88 - January 15, 2026


Contributor

Dechert is a global law firm that advises asset managers, financial institutions and corporations on issues critical to managing their business and their capital – from high-stakes litigation to complex transactions and regulatory matters. We answer questions that seem unsolvable, develop deal structures that are new to the market and protect clients' rights in extreme situations. Our nearly 1,000 lawyers across 19 offices globally focus on the financial services, private equity, private credit, real estate, life sciences and technology sectors.

We see the future...

In case you missed it, catch up on our Cyber Bits Crystal Ball edition. See our predictions for 2026.

Check it out here.

FTC Rescinds Ban on Rytr's AI-Assisted Tool that Allegedly Burdened Innovation

Last month, the two active members of the Federal Trade Commission ("FTC") reopened and set aside the FTC's prior 2024 consent order with AI company Rytr LLC ("Rytr"), which prohibited Rytr from providing an AI-enabled service that generated customer reviews or testimonials ("2024 FTC Order").

The 2024 FTC Complaint against Rytr ("2024 FTC Complaint") alleged that Rytr's AI tool violated Section 5 of the FTC Act ("Section 5") (see prior discussion in Cyber Bits here). However, the FTC recently re-examined the Rytr matter in connection with the Trump Administration's July 2025 America's AI Action Plan, which directed the FTC to review all "final orders, consent decrees, and injunctions, and, where appropriate, seek to modify or set-aside any that unduly burden AI innovation." In its review of the 2024 FTC Order, the FTC found the facts alleged in the 2024 FTC Complaint insufficient to support the allegations that Rytr violated Section 5. As a result, the FTC stated in its set-aside order ("Set-Aside Order") that the 2024 FTC Order "fail[ed] to provide any benefit to consumers and the public," and placed an undue burden on AI innovation.

Specifically, the Set-Aside Order noted that the 2024 FTC Complaint "contains no allegations that Rytr created deceptive marketing material," only that its customers could use its tool to do so. Moreover, the FTC stated that the 2024 FTC Complaint could not point to an instance where false reviews were in fact created or where Rytr had actual or constructive knowledge that user-generated reviews were used to violate Section 5. The FTC further found that the 2024 FTC Complaint lacked sufficient facts to show Rytr's platform caused or is likely to cause any injury to consumers. Instead, the Set-Aside Order set forth that consumers benefit from the invention of new tools, "even though almost all tools have both legal and illegal uses."

Takeaway: The Set-Aside Order illustrates that the FTC is following through with President Trump's directive to identify and rescind orders that burden AI innovation. We expect this FTC not only to continue to review and rescind Biden Administration orders that it views as having been the result of overreach, but also to be receptive to arguments from industry that regulating big tech, and AI in particular, can impede innovation. Companies developing or deploying AI that are under investigation, or that become subject to one, will want to employ these arguments to their benefit at the federal level, while remaining cognizant that state regulators are unlikely to step back from their AI enforcement efforts, even in light of the Trump Administration's current push for them to do so.

Texas AG Alleges Five Major TV Manufacturers Spied on Customers

On December 15, 2025, the Texas Attorney General ("TX AG") brought suit against five connected TV companies: Sony, Samsung, LG, Hisense USA Corp. ("Hisense"), and TCL Technology Group Corp. ("TCL"), alleging that these companies "are watching you back" through TVs that "aren't just entertainment devices" but "a mass surveillance system sitting in millions of American living rooms."

The TX AG has alleged that the connected TV companies secretly monitor what consumers watch across streaming apps, cable, and other devices connected via HDMI, using a technology known as Automatic Content Recognition ("ACR"). The TX AG further alleged that the TV companies are then selling profiles of customers based on their content consumption. The TX AG asserted that: (i) most consumers "do not know, nor have reason to suspect" that their content is being monitored, used and sold in this way; and (ii) the use of ACR is unlawful because: (a) deceptive labeling leaves consumers unable to provide informed consent to ACR; (b) difficult opt-out mechanisms undermine consumer privacy choices; (c) users cannot reasonably understand the deployed surveillance model; and (d) the data collected about consumers and their content viewing is excessive and disproportionate to the disclosed purposes.

The TX AG made further allegations against the two companies that are Chinese-owned: Hisense and TCL. Specifically, the TX AG alleged that those companies' televisions are "effectively Chinese-sponsored surveillance devices," because Chinese law requires its companies to share user data "whenever the Chinese government requests it for whatever purpose."

The TX AG has also been successful in obtaining temporary restraining orders ("TROs") against two of the connected TV companies. Two days after filing suit against Hisense, the TX AG announced it had secured a first-of-its-kind TRO that would prevent Hisense from collecting, using, selling, sharing, disclosing, or transferring ACR data about Texans as the litigation continues. The TX AG also secured a similar TRO against Samsung, preventing the company from continuing to use, sell, transfer, collect, or share ACR data relating to Texas consumers.

Takeaway: The TX AG is serious about consumer privacy and will continue to be aggressive in its enforcement of privacy, AI and consumer protection laws, which will include taking on big tech. Even as the federal agencies adopt a more consumer-friendly approach, companies need to double down on their efforts to comply with state laws, particularly when operating in high-risk areas. Further, companies based in China that receive personal information of Americans need to be on high alert for potential enforcement actions and litigation at both the state and federal level, and will want to develop a defense strategy in advance in an effort to fend off potential claims and the associated reputational damage in the US market.

Regulatory Guidance Issued on AI Chatbots Under the UK Online Safety Act 2023

Ofcom, the regulator responsible for enforcing the UK's Online Safety Act ("OSA"), has published guidance on when AI chatbots fall within the scope of the OSA. Under the OSA, providers of user-to-user services (such as social media sites), search services and pornographic services must assess and mitigate the risks of harm to users, especially children. A chatbot that constitutes, or is integrated into, such a service can therefore be in scope.

Ofcom has now clarified that some chatbots or their outputs may be out of scope if they: (1) only allow interaction with the bot and no other users; (2) do not search multiple websites or databases when giving responses to users; and (3) cannot generate pornographic content. In addition, any AI-generated content shared by users on a user-to-user service is classed as user-generated content and would be regulated in the same way as content generated by humans.

Ofcom has encouraged in-scope providers to prepare now to comply with their duties, which include undertaking risk assessments, implementing proportionate mitigation measures, and enabling users to easily report harmful content. Key protective measures outlined in Ofcom's draft Codes of Practice include having a named person accountable for compliance, maintaining well-trained content moderation functions, using effective age assurance, and providing accessible reporting processes.

Takeaway: Use of AI chatbots to develop harmful content is an increasingly prominent and widespread issue. Ofcom's guidance provides helpful clarity on the applicability of the OSA to chatbots, and organizations deploying AI chatbots will want to carefully assess whether their services meet the definitions of in-scope services.

First Draft Code of Practice on Transparent AI Systems Under the EU AI Act Published

The European Commission has published its first draft Code of Practice on Transparency of AI-Generated Content under the EU AI Act. The purpose of the Code is to support compliance with transparency obligations under the AI Act relating to marking of AI-generated content and labelling of deepfakes. While the AI Act transparency requirements are mandatory, the Code itself is a voluntary tool designed to assist compliance.

According to the Commission, the Code is designed to help organizations with marking in the required machine-readable and detectable manner, recognizing that there is currently no single technical solution and so multiple approaches will likely be needed. Techniques such as marking metadata, content watermarking, fingerprinting and structural marking are considered.

The draft Code was developed through an extensive multi-stakeholder consultation involving hundreds of participants from industry, academia, and civil society, including a public consultation with 187 written submissions and three workshops held in November 2025. The first draft, however, remains high-level and broad. Stakeholders can provide written feedback by January 23, 2026, with the second version of the Code expected to be published in March 2026 and further refined in subsequent iterations.

Takeaway: Although discussions regarding postponement of certain AI Act provisions are ongoing, organizations subject to the AI Act's transparency obligations will in the meantime be looking ahead to the current August 2026 deadline with some trepidation, especially if a final Code does not arrive until later in Q2. While the draft Code may well be subject to further change, in the interests of timing, organizations subject to the transparency provisions will want to carefully review the draft Code and consider their approach to AI transparency obligations in light of the current draft.

Dechert Tidbits

Getty Images Allowed to Appeal Secondary Copyright Infringement Claim Against Stability AI

The English High Court in Getty Images v. Stability AI has granted permission for Getty to appeal its secondary copyright infringement claim, which turns on whether an AI model can constitute an "infringing copy." In previously denying Getty's claim, the court reasoned that the AI model at issue cannot be classed as an infringing copy as it does not store or reproduce any Getty copyrighted works. However, in granting permission to appeal, the court noted this issue has not "previously been considered by any court" and reasonable minds can differ when it comes to statutory interpretation.

UK Data Regulator Publishes Response to the Cyber Security and Resilience Bill

The UK Information Commissioner's Office ("ICO") published its response to the Cyber Security and Resilience (Network and Information Systems) Bill. If passed, the Bill would expand the range of organizations in scope of cybersecurity legislation and strengthen regulators' powers. The ICO broadly supports the Bill and welcomed its enhanced costs recovery power but urged the government to create practical guidance to help regulated entities meet new incident reporting duties.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

