One of my strongest childhood memories is watching my mother volcanically lose her temper at the dentist's.

Having rounded up me and my sister, forced us into shoes and coats, and walked us into town to the local surgery, she presented herself at the front desk. "Hello, I'm Mrs Lumb, I have an appointment at 15:00".

"No. That's not right, there's nothing in the system for 15:00 today."

"Are you sure?"

"Yes, nothing in the calendar for that time."

"But you sent me a letter telling me that I had an appointment at 15:00 today. Look, I have it here."

"Well I don't know why you've got that. The calendar says there's no appointment at 15:00. The computer doesn't make mistakes."

That line, 'the computer doesn't make mistakes', and the quite remarkable loss of cool it prompted in response, have stuck with me for over twenty years. We never visited that surgery again.

But that attitude, the assumption that what a computer says must automatically be correct, and that whatever data a computer generates is to be trusted over the word of a human (or even the contents of a written letter from the computer's owner), has become ever more widespread over the same period. While most people seem entirely willing to concede that physical machines might sometimes spew out a defective product, many seem totally unwilling to contemplate the idea that intangible software 'machines' might do the same thing, spewing out a record which is simply wrong. That is all the more remarkable to me given how often we watch computers crash, or suffer similar glitches, right in front of our eyes.

Indeed, that attitude was one of the central drivers of the Post Office scandal, in which hundreds of innocent sub-postmasters were wrongly convicted of crimes based on the erroneous outputs of the Post Office and Fujitsu's 'Horizon' accounting system. A travesty which my SMB partner Simon Goldberg has been considering extensively over the course of the ongoing public inquiry.

It's only right, in the wake of a miscarriage of justice on that scale, that we all ask ourselves "how could it have been allowed to happen?". One of the uncomfortable answers to that question is that, just over twenty years ago, the law was allowed to become too trusting of computer generated evidence. Had it not, history might have been very different.

How so? Well, once upon a time the criminal courts had a natural suspicion of computers and were bound by the Police and Criminal Evidence Act 1984 to treat them as an unreliable new technology: a party needed to prove that a computer was 'operating properly' and had not been 'used improperly' before any evidence generated by it could be admitted.

That all changed in 1999 when the rules of evidence were amended by section 60 of the Youth Justice and Criminal Evidence Act 1999 (which came into force in April 2000). That Act softened the position down to the ordinary common law rule that evidential records (in this case computer generated ones) are presumed to be accurate and may therefore be admitted as evidence unless the other party can produce evidence to the contrary.

The Horizon IT system was introduced in 1999, with prosecutions featuring its evidence commencing at almost the exact same time as the rules on computer generated evidence were changed.

A remarkable coincidence, you might think. That the UK's largest miscarriage of justice, which was predicated almost entirely on faulty computer generated evidence, began in the exact same year as the rules of evidence changed so as to make it easier to admit computer generated evidence in criminal trials. Could prosecutions on that scale really have succeeded if the Post Office had been required to demonstrate that its Horizon system was 'operating properly' in order to bring evidence from it? It seems highly unlikely to me.

So bear that in mind as you listen to the outcomes of the Post Office Inquiry. Clearly it was an abysmally run organisation, and its powers to bring private prosecutions need serious reform (or abolition). No doubt the Inquiry will make a number of recommendations to put those matters right.

But is there a wider question for the British legal system to consider that goes beyond the Inquiry's remit? Has our legal system just been shown to be unduly trusting of computer generated evidence, and is it not inevitable that future prosecutions predicated on inaccurate evidence from computers will be brought, and will succeed, unless the law is changed back? Do we really need another case like Horizon before we stop automatically trusting computer records?

Bluntly, we shouldn't wait to make the same mistake twice. Parliament should, as a matter of urgency, consider undoing the change it made in 1999 and bringing back rules of evidence that either bar computer generated evidence in criminal prosecutions, or at least put a question mark over it, unless the party seeking to rely on that evidence can demonstrate that the system which generated it is accurate. It could use the original wording from PACE 1984, or adopt a revised version updated for the modern day. Either way, if we've learned anything from the Post Office scandal it must surely be that evidence from systems like Horizon should not be sending people to prison without first being shown to be trustworthy.

Because, as my mum could have told you twenty years ago, computers really can make mistakes.
