ARTICLE
7 April 2026

Hornet 7: AI Agents Shouldn't Go Where Humans Cannot See

AlixPartners

Contributor

AlixPartners is a results-driven global consulting firm that specializes in helping businesses successfully address their most complex and critical challenges.
By Rob Hornby

It is only six months since I wrote about my agentic AI worries.

Since then, things have moved on to a new level, and I have new advice.

The dominant safety assumption in AI has been "human-in-the-loop" - if a person is watching, the system remains controllable.

But as agentic AI systems grow more complex, faster, and more interconnected, something important is becoming clear: in the most consequential scenarios, humans can no longer realistically stay in the loop. Agents act at machine speed and at machine scale, and a human reviewer simply cannot inspect every decision before it takes effect.

We may need a new governance model that relies on other machines being in the loop while humans move above it.

I call this HOMIL.

You can find the LinkedIn newsletter version here: https://www.linkedin.com/newsletters/the-hornet-7376193049816625152/


Originally published by Imago Humanis.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

