The Bar Council has updated its guidance on the use of ChatGPT and other generative artificial intelligence (AI) systems based on large language models (LLMs), following recent case law at the High Court and the fast-paced development of the technology.

It concludes that barristers should make every effort to understand these systems and, if appropriate, use them responsibly as tools in their practice, ensuring accuracy and compliance with applicable laws, rules and professional codes of conduct.

The guidance, available on the Bar Council ethics and practice hub, sets out the key risks of LLMs: anthropomorphism; hallucinations; information disorder; bias in training data; mistakes; and the confidentiality of data used in training.

In light of recent case law, we have updated our ethics guidance. Changes include:

  • Reference to new generative AI tools based on large language models (LLMs) such as Google’s Gemini, Perplexity, Harvey, and Microsoft Copilot
  • Explaining that the guidance applies to LLM software specifically aimed at lawyers
  • Highlighting that LLMs do not have a conscience or social and emotional intelligence
  • Reference to recent case law on the use of LLMs by lawyers
  • Reference to an academic study that assessed the reliability of AI legal research tools
  • Explaining that barristers can access authoritative legal resources at the Inns of Court libraries
  • Highlighting the cyber security vulnerabilities that arise when using LLMs
  • Confirming that the ultimate responsibility for all legal work remains with the barrister
  • Restating the importance of respecting legal professional privilege, confidentiality, and complying with data protection regulations
  • Expanding on the interaction between intellectual property law and LLMs

Launching the guidance, Barbara Mills KC, Chair of the Bar Council, said:

“Recent case law, including the High Court judgment, emphasises the dangers of the misuse by lawyers of artificial intelligence, particularly large language models, and its serious implications for public confidence in the administration of justice.

“We recognise that the growth of AI tools in the legal sector is inevitable and occurring at a fast pace. As the guidance explains, the best-placed barristers will be those who make the effort to understand these systems so that they can be used with control and integrity. Any use of AI must be done carefully to safeguard client confidentiality, maintain trust and confidence, protect privacy, and ensure compliance with applicable laws.

“The public is entitled to expect the highest standards of integrity and competence in the use of new technologies from legal professionals.

“Our joint working group with the Bar Standards Board has begun its initial scoping work to identify how barristers can be supported to uphold standards with appropriate further training and supervision. Our guidance will be kept under review and will be updated as necessary to keep pace with the ever-evolving picture.

“We remind all practitioners that they must be vigilant and adapt as the legal and regulatory landscape changes.”

The guidance has been developed by the Bar Council’s IT Panel in consultation with the Bar Council’s Ethics Committee and Regulation Panel. It does not constitute legal advice and is not ‘guidance’ for the purposes of the BSB Handbook (I6.4).