ChatGPT: legal decision-making and consideration of its reliability

The Master of the Rolls, Sir Geoffrey Vos, recently gave a lecture at the Law and Technology conference on 14th June 2023 exploring the effect that generative AI is already having, and is likely to have, on legal services and the courts.

The speech began with the story of a New York lawyer, Steven Schwartz, who misused the artificial intelligence chatbot ChatGPT, relying on it for research in a personal injury case. The judge in that case took the trouble to look up the cases cited by Schwartz, only to find they were non-existent. An interesting aspect of this was that Schwartz asked ChatGPT whether one of the cases was a “real case” and it replied that it was.

Schwartz has since been fined $5,000 for submitting the fake citations, as he “consciously avoided” signs that the cases he was citing were fake, thereby acting in bad faith and misleading the court.

Sir Geoffrey Vos explored this in his speech, considering how we will have to develop mechanisms to deal with the use of generative AI within the legal system. The Schwartz story highlights one thing generative AI cannot do effectively for lawyers: allow them simply to cut corners.

Takeaway points from this lecture for lawyers:

  1. Costs: clients are unlikely to pay for assistance they can get for free. If briefs, for example, can be written by ChatGPT or Spellbook and checked by lawyers, clients will presumably apply pressure for that to happen if it is cheaper and saves some of an expensive fee earner's time.
  1. Regulation: one can envisage a rule or a professional code of conduct regulating whether and in what circumstances and for what purposes lawyers can: (a) use large language models to assist in their preparation of court documents, and (b) be properly held responsible for their use in some such circumstances.
  1. Checking of facts: lawyers using generative AI need to be savvier in checking their facts. Even ChatGPT, when describing what it can do, concludes: “however, it is essential to note that ChatGPT is not infallible, and there is always a degree of uncertainty involved in legal decision-making”.
  1. AI needs to be specifically trained: if GPT-4 (and its subsequent iterations) is going to realise its full potential for lawyers in providing accurate legal advice, accurate predictions of legal outcomes and accurate assistance with dispute resolution processes, it is going to have to be trained to understand the principles upon which lawyers, courts and judges operate. In the meantime, court rules may have to fill the gap.
  1. Confidence in the system: the limiting feature for machine made decisions is likely to be the requirement that the citizens and businesses that any justice system serves have confidence in that system.

Sir Geoffrey Vos stated that he did not think the future use of AI will make the work of lawyers and judges redundant. He did, however, think AI will be used within the digital justice systems of the kind we are creating in England and Wales, and that AI may also, at some stage, be used to take some (at first, very minor) judicial decisions. However, Vos gave decisions about the welfare of children as an example of those that humans are unlikely ever to accept being made by machines. Family lawyers may breathe a sigh of relief and agreement.

Consideration of AI and the law is not a new topic; however, GPT-4 was released by OpenAI on 14 March 2023 and is already entering the legal profession in an accessible way. Professor Richard Susskind, technology adviser to the Lord Chief Justice, is a futurist many will have heard of for his well-known theories on technology and the courts. In a recent article in The Times, Susskind reflects on GPT-4 and how its great significance lies not in what it is today, but in what it is likely to become.

A record of Sir Geoffrey Vos’ lecture can be found here: