The hotly anticipated annual Bar Council Law Reform Lecture grappled with AI and lawyers.
I was hosted for dinner that night by the Bar Council’s Law Reform Committee. It was a pleasure to continue what we’d discussed at the lecture reception about how AI is increasingly used in forensic decision-making. Biased and discriminatory collection of data sets is just one of the concerns that all lawyers must be ready to address.
This year’s celebrated lecturer, the Master of the Rolls Sir Geoffrey Vos, a former Chair of the Bar Council, is a great supporter of reform. A strong opening line to his lecture painted a picture of the rest:
“When I started at the Bar, technology was a typewriter, foolscap paper and a bottle of Tippex.” Cue: the personal computer.
The lecture dealt with guiding principles from a legal and judicial perspective about adopting technologies. Lawyers owe a duty to use technologies that provide a quicker means of determining issues or resolving disputes, whether in courtrooms or tribunals or other forums. Vindicating legal rights quickly is central to the rule of law.
Minimising cyber-crime is a must. Newly available and untested technologies raise questions about automated collection and determinations. Art. 22 (1) GDPR concerns automated individual decision-making, including profiling: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
Sir Geoffrey Vos now chairs the new Online Procedure Rule Committee. The Ministry of Justice announced the committee in January this year, and it aims to guide judges, legal representatives and litigants through online court procedures. The committee has recently met for the first time to discuss governance of the digital justice system in courts and tribunals under the Judicial Review and Courts Act 2022.
The long title of the Bill is instructive: “A Bill to Make provision about the provision that may be made by, and the effects of, quashing orders; to make provision restricting judicial review of certain decisions of the Upper Tribunal; to make provision about the use of written and electronic procedures in courts and tribunals; to make other provision about procedure in, and the organisation of, courts and tribunals; and for connected purposes.”
Convening a day after the summer solstice, the lecture treated its audience to the cool environs of the Ashworth Centre suite at Lincoln’s Inn. Chaired with a velvet glove by the head of the Bar Council’s Law Reform Committee, the lecture was titled ‘Artificial intelligence and virtual worlds: The future of law’. The billing for the Master of the Rolls promised that the twentieth annual Law Reform Lecture would explore the rapidly evolving role of types of artificial intelligence (AI) in law and the challenges for law in a virtual world.
Several themes in this lecture had been prominent in Sir Geoffrey’s lecture in April this year, also given at Lincoln’s Inn: ‘The future of London as a pre-eminent dispute resolution centre: opportunities and challenges’. In the opening minutes of April’s lecture, Sir Geoffrey had prophesied London as a pre-eminent dispute resolution centre for reasons including its lawyers’ grip on legal technologies: “[G]enerative artificial intelligence is now openly available to all, including lawyers and can be expected to play a major role in future types of dispute resolution.”
We might have heard quite a different lecture a year ago – or even before November 2022, when an American AI research laboratory, OpenAI, launched ChatGPT. At the time, the press reported various commentators predicting that ChatGPT would make lawyers redundant. That’s just not the case.
(How) should we regulate AI?
Perhaps lawyers first want to know which of the different forms of AI should be regulated. Or might we (wish to) treat AI like any other technology? Take a couple of examples. Whilst a power station must be regulated, does using AI to operate a power station necessarily give rise to separate and further risks? There’s a compelling case that we should regulate self-driving cars; but is that because we’re concerned that self-driving cars use AI? Unlike ordinary computer technologies, AI systems cannot be exhaustively known – let alone tested – at the outset. Might we plausibly distinguish between regulating AI itself and merely optimising or monitoring commercial efficiencies?
It’s not clear that we obviously need to regulate every sort of AI system. Perhaps we should first look to regulating governments rather than technologies.
Should we regulate outcomes or processes – or both? If a process delivers the right outcome, and we’re reasonably confident that that’ll happen, need we regulate the process itself? Lawyers surely aren’t wrong that a clear framework for regulating AI is imperative. That’ll be essential to instilling guarded optimism about new technologies.
Abigail Bright is a barrister at Doughty Street chambers and a member of both the Bar Council Ethics Committee and the IT Panel. She is a director and trustee of the Incorporated Council of Law Reporting for England and Wales and a guest lecturer at the Institute of Psychiatry, King’s College, London.
Watch the Law Reform Lecture with the keynote speech by Sir Geoffrey Vos, followed by an expert panel discussion with Jamie Susskind, Shobana Iyer, and Dr Matthew Lavy KC, chaired by Eleena Misra KC.