4th February 2025
By Laura Crowley, 4 Pump Court.
The use of Artificial Intelligence (AI) in the preparation of written documents is an area of increasing importance and debate in all sectors, including the law.
In a recent judgment, Bradley & Chuang v Linda Frye-Chaikin [2025] CIGC (Civ) 5, the Grand Court of the Cayman Islands (Justice Jalil Asif KC) has given guidance to parties litigating in offshore jurisdictions on the use of AI, the duties of parties in proceedings where AI is deployed (both users of AI and recipients of documents prepared using AI), and the potential consequences of failing to check the accuracy of material generated by AI.
The underlying claim against the Defendant, Ms Frye, was for breach of an agreement to sell her condo unit in Grand Cayman. In September 2024, the Court granted summary judgment on the claim and the application to which this judgment relates was for a stay of that order pending the hearing of Ms Frye’s appeal. A stay was refused.
In its judgment, the Court addressed the content of some of Ms Frye’s written submissions, which it concluded had been prepared with the use of an AI tool and contained “a number of hallucinations[1] and erroneous material that does not assist the Court and risks leading it astray” (including a reference to an Order in the Grand Court Rules which did not exist, citations of reported cases which did not exist, and other errors). Although in this case the relevant submissions had not been prepared by a legal representative, the Judge gave general guidance on the use of AI in submissions which applies to lawyers as well as litigants.
The Court at [20] referred (not by name) to the well-publicised case of Mata v. Avianca, Inc., 678 F Supp 3d 443 (SDNY 2023), in which two New York attorneys were sanctioned for citing non-existent case law authorities generated by ChatGPT. The failings identified in that case were not the use of AI, which was not inherently improper, but the failure to check the accuracy of its output and the attorneys’ attempts to conceal their mistakes. The Court also cited the approach of the First-tier Tribunal in the English decision of Harber v HMRC [2023] UKFTT 1007 (TC), in which the appellant submitted a document to the tribunal referring to nine purported authorities which did not exist. Justice Asif made clear at [22] that the same approach applied in the Cayman Islands and set out the potential consequences of failing to verify material generated by AI tools:
“There is nothing inherently wrong about using technology to make the conduct of legal disputes more efficient and their resolution speedier, including using AI tools. However, it is vital that anyone who uses such an AI tool verifies that the material generated is correct and ensures that it does not contain hallucinations – in other words that statutes, procedural rules and case law authorities that are referred to exist, and say what they are asserted to say, and that principles of law are accurately stated. Users of AI tools must take personal responsibility for the accuracy of material produced, and be prepared to face personal consequences, including the possibility of wasted costs orders, if the work product that they put forward to the Court is not accurate”.
The reasons set out by the Court at [23] for this approach echoed those in the US and English authorities cited, namely that failure to take these obvious precautions when using AI tools results in:
“wasting the time of the opponents and the court; wasting public funds and causing the opponent to incur unnecessary costs; delaying the determination of other cases; failing to put forward other correct lines of argument; tarnishing the reputations of judges to whom non-existent judgments are attributed; and impacting the reputation of the courts and legal profession more generally.”
Finally, the Court emphasised the need for all counsel involved in the conduct of cases to be alive to the risk of errors and hallucinations in material generated by AI. Those using such tools must check their material carefully, but opponents equally need to be alert to erroneous material and prepared to challenge it. Justice Asif stated at [25] that, in his view, “as officers of the court… an attorney’s duty to assist the Court includes the duty to point out when their opponent is at risk of misleading the Court, including by reference to non-existent law or cases”.
Lawyers practising in many different jurisdictions are increasingly aware of both the possibilities and the pitfalls of AI.
The Bar Council of England and Wales published a document for barristers in January 2024 setting out “Considerations when using ChatGPT and generative artificial intelligence software based on large language models”.
This latest decision from the Grand Court of the Cayman Islands emphasises once again the need for lawyers to be alert to these issues, given their duties to the Court and professional responsibilities.
[1] “Hallucinations” is a term used to describe the phenomenon whereby outputs generated by AI tools may sound plausible but are either factually incorrect or unrelated to the given context.