Law reform driven by understanding computers

Professor Harold Thimbleby has over 500 publications in computer science. He is Professor Emeritus of Gresham College and Swansea University and was a Royal Society-Wolfson Research Merit Award holder. 

In this guest blog for the Bar Council, Professor Thimbleby calls for a new regulatory system for computers.

Computers do amazing things, from keeping people alive with heart implants to exploring outer space and administering our law. When computers are so impressive, it seems intuitive that they must be very reliable — so why waste time questioning their reliability in court? Thus, the Common Law presumption that computer evidence is reliable speeds up litigation by avoiding technical examination of the reliability of computer systems, and of the evidence they have collected, stored, processed and summarised for the court.

The Post Office Horizon scandal occurred when an unreliable accounting system, Fujitsu’s Horizon, and management in denial collided with regulated accountancy, creating the false story that sub-postmasters had committed fraud. The resulting scandal highlighted problems with computers, and specifically the naivety of the Common Law presumption. Recognising the limitations of the presumption, in January 2025 the Ministry of Justice put out a call for submissions on the use of computer evidence.

Computers are completely different to any other technology. We have simply not got used to them, nor worked out how society can think clearly about them. The result is that we believe the solution to our current bad, inefficient systems is always buying our way out with new, innovative systems. We keep looking to a more exciting future, rather than looking back and understanding why things never work out as well as we hope.

We end up with results like the Post Office scandal and the HM Courts & Tribunals Service (HMCTS) IT failure cover-up affecting over 600 cases, where the affected people haven’t been informed because to do so would “cause more harm than good.”

Computer programs are the most complex things ever made, and the details of computer systems are impossible even for experts to understand. Microsoft Excel alone has millions of lines of code, and it depends on lots of other programs, as well as depending on the reliability of spreadsheets devised by often unqualified users.

The problem of unreliable computing is plainly apparent in software warranties. Although CrowdStrike’s software crashed 8.5 million computers worldwide in July 2024, causing widespread chaos, that software was sold under a warranty:

… THE SOFTWARE AND ALL OTHER CROWDSTRIKE OFFERINGS ARE PROVIDED “AS-IS” AND WITHOUT WARRANTY OF ANY KIND … TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW CROWDSTRIKE SHALL NOT BE LIABLE …

Similar ‘warranties’ (often in capitals) are present on all software, including AI.

With software being so unreliable and with manufacturers refusing to take responsibility, it is incredible that we have a Common Law presumption that computer evidence is reliable.

The Ministry of Justice published a 2025 policy paper, with a foreword by Lord Timpson, on its plans to use AI, which included phrases like: “AI shows great potential to help deliver swifter, fairer, and more accessible justice for all”; “AI holds tremendous promise”; and “as AI technologies mature…”

I would have expected more critical scrutiny, or references to audits, evidence, spending or research studies.

But why plan to adopt anything the MOJ admits is immature?

Moving forward requires raising technical competence at all levels, from in-house IT staff to managers to policy makers, and changing culture, especially in the MOJ and in court procedures.

I propose a framework of seven interrelated ideas:

  1. For computer evidence to be accepted in courts, computer systems need both certificates signed by qualified persons that the evidential processes followed were of forensic standards, and certificates that clearly warrant all relevant systems are fit for purpose.
  2. Software needs certificates of reliability, just as vehicles have MOTs. Current warranties show that computer systems, and the evidence they process, are not reliable.
  3. Developers and IT managers need professional accreditation.
  4. Certifiers need accreditation, and they need to be up to date and competent.
  5. Both the police and expert witnesses need accreditation that they are competent to understand and handle computer evidence.
  6. Future-proofing the framework will be necessary, as innovative technologies will endlessly appear and disrupt it.
  7. We must improve the quality of software engineering education, too, as well as the quality of professional accreditation (points 3, 4, 5), to meet the standards needed to ensure reliable computer evidence.

Inverting the presumption will enable courts to efficiently exclude unreliable evidence. Insurance companies will no doubt step in to provide a bridge between the reliability that courts and customers more generally want and what manufacturers are providing.

We’ve been here before, and the Government decided to do something about it in law. The Medical Act 1858 begins: “Persons requiring medical aid should be enabled to distinguish qualified from unqualified practitioners.” The 1858 Act stopped unqualified people masquerading as doctors and harming patients.

NHS computers now make medical decisions, and robots perform operations. Clearly, incompetent computer developers and their computer systems can do far more damage to far more people, far faster, than quack doctors can — yet they aren’t regulated. And there are numerous examples of computers failing and harming people in all areas, not just in healthcare: the Post Office scandal, Universal Credit system flaws, bugs in driverless cars, cyberattacks and much more.

Why, then, don’t we have a Digital Act that declares “Persons requiring digital systems that are fit for purpose should be enabled to distinguish qualified from unqualified practitioners”? That would solve the key problem.

The AI systems proposed by Lord Timpson for the MOJ will not be fit for purpose unless there are urgent cultural changes. If a Digital Act seems unrealistic, it should at least start with the MOJ itself adopting the framework above.