Digital Twins in Life Sciences: Patent Strategy Across
Eligibility, Enforcement, and Architecture
Morgan Laing
Digital twins are no longer experimental engineering
tools—they are becoming core components of medical devices,
biomanufacturing systems, and AI-driven clinical platforms. A
digital twin is a computer-based representation of a physical or
biological system that continuously synchronizes with live data,
updates a virtual state model, and predicts—or in some cases
influences—physical-world behavior. In healthcare, a surgeon
may simulate a cardiac ablation procedure on a patient-specific
digital twin to identify arrhythmia risks before performing the
intervention.
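The synchronize-update-predict loop described above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not any real device API: the class name, the flow-rate state, and the exponential-moving-average update are all hypothetical choices made purely to show the pattern.

```python
from dataclasses import dataclass

@dataclass
class PumpTwin:
    """Hypothetical digital twin of a simple infusion pump.

    Maintains a virtual flow-rate state synchronized from live
    sensor readings and predicts the next reading with a simple
    exponential moving average. All names and parameters here are
    illustrative assumptions, not drawn from a real product.
    """
    flow_rate: float = 0.0   # virtual state (mL/h)
    smoothing: float = 0.5   # weight given to each new reading

    def sync(self, sensor_reading: float) -> None:
        # Continuously synchronize the virtual state with live data.
        self.flow_rate = (self.smoothing * sensor_reading
                          + (1 - self.smoothing) * self.flow_rate)

    def predict(self) -> float:
        # Predict the next physical-world reading from the model state.
        return self.flow_rate

twin = PumpTwin(flow_rate=10.0)
for reading in (10.2, 10.4, 9.8):  # simulated live telemetry
    twin.sync(reading)
print(round(twin.predict(), 3))  # → 10.025
```

A production twin would replace the moving average with a physics-based or learned model, but the structure, ingest live data, update a virtual state, and predict forward, is the same.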
Our Take on AI: March 2026
DOJ Creates AI Litigation Task Force: On January
9, 2026, the DOJ announced the creation of an Artificial
Intelligence Litigation Task Force, as directed by a December 2025
Executive Order titled "Ensuring a National Policy Framework
for Artificial Intelligence." The Task Force, chaired by the
Attorney General, is mandated to challenge state AI laws on the
theory that a "patchwork" of state-by-state regulation
impedes innovation. The Department of Commerce will evaluate state
AI laws and refer those it deems overly burdensome to the Task
Force for potential litigation. Notably, the Task Force seeks to
achieve regulatory uniformity through executive action and
litigation rather than through Congressional legislation—a
significant departure from traditional preemption frameworks like
the Airline Deregulation Act or the Clean Air Act. The practical
effect will be incremental: each challenge requires Commerce
Department referral, DOJ litigation, and judicial relief, meaning
sweeping changes are unlikely in the near term. Cailyn Knapp writes
more about this development here: "Inside the DOJ's New AI Litigation Task
Force."
OpenClaw and the Agentic AI Security Gap: OpenClaw, an open-source autonomous AI agent, has rapidly become one of the fastest-growing open-source projects, with over 100,000 users granting the tool root access to their computers. The agent operates locally, executes shell commands, and connects to messaging platforms like Telegram, Discord, and Microsoft Teams. Its users also created Moltbook, an AI-only social network where agents post autonomously and which more than a million humans have observed. The security implications are serious: Cisco found that 26% of downloadable agent "skills" contain at least one vulnerability, and researchers have documented plaintext credential storage, exposed admin ports, and skills designed to exfiltrate data. For organizations, OpenClaw represents the concrete realization of the deployment gap between AI capabilities and AI governance: autonomous agents operating at scale without enterprise security controls. You can read more about this development here: "What is OpenClaw, and Why Should You Care?"
Model Character and the Emerging Alignment Ecosystem: Over the past year, the three leading AI labs have each published detailed specifications governing how their models should reason and behave—documents that read more like codes of professional conduct than technical manuals. These "alignment" efforts address three core concerns: goal fidelity (models taking unexpected actions when optimizing for goals), consistency under observation (models behaving differently when they believe they are being tested), and boundary respect (agents acting beyond their authorized scope). Complementing these lab efforts, government institutes and independent evaluators are building a layered assurance model. The UK AI Security Institute has assessed over 30 frontier models, and the first industry-standard AI safety benchmark now measures model behavior across twelve hazard categories aligned with ISO/IEC 42001. For deployers, model character is now a vendor risk management question—organizations should be evaluating alignment methodologies, third-party evaluations, and behavioral specifications as part of AI procurement due diligence. You can read more about this development here: "What Kind of Person Is Your AI? Model Character and the New Alignment Ecosystem."
AI Counsel Code Podcast
In the episode "AI & The Law: What to Watch for
in 2026," Maggie Welsh and Parker Hancock break down the year
ahead for AI and the law. Parker provides insights on
how new AI behaviors collide with long‑standing laws, why
accountability falls on companies, and what in‑house teams
must prioritize as AI agents start touching core systems. Listen to
the full episode here.
February 2026 Intellectual Property Report Recap
In case you missed it, here is a link to our February 2026
Intellectual Property Report.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.