It's not often that the art world intersects with technology
law. But that's exactly what happened when artist Helen
Knowles staged a performance of
The Trial of Superdebthunterbot at the Zabludowicz Collection
in north London on 26 February.
"A debt collecting company, Debt BB buys the student
loan book from the government for more than it is worth, on the
condition it can use unconventional means to collect debt. Debt BB
codes an algorithm to ensure fewer loan defaulters by targeting
individuals through the use of big data, placing job adverts on web
pages they frequent. Superdebthunterbot has a "capacity to
self-educate, to learn and to modify its coding sequences
independent of human oversight" (Susan Schuppli, Deadly
Algorithms). Five individuals have died
as a result of the algorithm's actions, by partaking in
unregulated medical trials. In the eyes of the International Ether
Court, can the said algorithm be found guilty?"
The algorithm has realised that unregulated and dodgy jobs
generate cash more quickly, and has steered vulnerable defaulters
towards such jobs. Debt BB is insolvent and the original programmer
has died. The case has been brought to the International Ether
Court under the Algorithm Liability Act, with Superdebthunterbot
standing accused of gross negligence manslaughter.
Participants watched a film of the trial, and then the jury sat
down to deliberate (ably aided by audience contributions). The jury
comprised artists, technologists, legal academics, a
futurologist and a Kemp Little Commercial Technology lawyer.
Initially, the jury found it difficult to accept the premise
that an algorithm could be liable for a crime. In the end,
Superdebthunterbot was granted a second chance at life, with five
votes for guilty and seven for not guilty. However, the
discussion brought out a number of interesting themes:
The emotional and intellectual
difficulties with applying a human-based code of ethics (the law)
to machines. The concept of negligence appeared to translate fairly
well to independent thinking machines, as the concept of
"reasonable foreseeability" is an objective standard, and
doesn't require analysis of any mental state. However, there
was a divide between the emotional reaction judging the behaviour
as morally wrong and the intellectual desire to impute such
behaviour to a rational agent.
The purpose of punishment, which is a
live and controversial debate within human society. The jury was
only asked to establish the algorithm's liability, as
sentencing would be left to the judge, but what would be the point
of punishing a machine? How would any potential 'Algorithm
Liability Act' approach the competing strands of punishment:
rehabilitation, prevention, retribution, restorative justice
(i.e. helping victims overcome the crime) or even redemption?
The difficulty differentiating
between an algorithm, as a piece of code, and its physical
implementation in a machine or network. It would have been much
easier to find the Superdebthunterbot algorithm liable had it been
embodied in a humanoid robot, but it's much more difficult to
do that when the algorithm operates across a network of disparate
machines operated by third parties.
Regulation was a recurring theme.
What would this involve? How do we move beyond and improve upon
Asimov's laws? How do we ensure compliance, once the human
owners or creators are dead or insolvent? How can regulators keep
up with an increasingly complex area of technology? How can the
public have meaningful oversight and understanding of the
algorithms and the regulators?
If Artificial Intelligence can be
legally responsible for its actions, is a sufficient level of
reflexivity or self-understanding required? Jurors drew parallels
with the legal responsibility of children, where at a certain age
individuals are deemed by the law to be responsible for their
actions. How would any such maturity level for an algorithm be
assessed?
This scenario was not far removed
from today's reality. The jury acknowledged that this was
already happening, although in a less visible way. The value of
this piece of art was to make visible and crystallise issues that
are already out there. Is the Artificial Intelligence the problem,
or is the real issue the conversion of humans into data, and then
the paternalistic manipulation of those humans in a technical and
opaque way?
Despite their differences, there was one thing that every single
juror agreed upon: any liability for the Artificial Intelligence
must not in any way let the human owners, operators and creators
off the hook: a reminder that we are all responsible for the
future. Has the weight of freedom ever been so great?
The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.