By: Robert Hamilton
Cases Commented On: Reddy v Saroya, 2025 ABCA 322 (CanLII)
PDF Version: The Alberta Court of Appeal Weighs in on the use of AI in Court Submissions
In Reddy v Saroya, the Alberta Court of Appeal had the opportunity to comment on the use of artificial intelligence (AI) in court submissions in a case where counsel had filed a factum containing multiple AI-fabricated citations. It is the latest warning for lawyers, following cases such as Zhang v Chen, 2024 BCSC 285 (CanLII) (Zhang) and Ko v Li, 2025 ONSC 2965 (CanLII) (Ko), that AI and large language models (LLMs) cannot reliably prepare legal materials and should not be relied on to that end. These tools can be used to gain efficiencies and will surely play an increasingly important role in legal practice, but lawyers, legal academics, and law students who use them must understand their limits. Reddy shows the cost of forgetting that.
Reddy dealt with a civil contempt finding against the appellant, Parminder Saroya (Reddy, at para 1). For present purposes, what is notable is the court’s detailed discussion of counsel’s misuse of an LLM to prepare the appellant’s initial factum, which cited fabricated cases. The ABCA held that the lawyer whose name is on a filed document is responsible for its contents, even if a third-party contractor was used for drafting (Reddy, at para 83).
The issue was first identified by the respondent’s counsel, who noted in their factum that the appellant’s submissions “did not include hyperlinks or copies of the cases relied on, and that seven of the cases cited by the appellant could not be found, including six that were purportedly decided by this Court” (Reddy, at para 74). The respondent suggested that these cases likely did not exist and noted the significant time and effort spent searching for them.
This is reminiscent of Zhang, where special costs were sought against counsel based on “the costs incurred by MacLean Law to try to find and then expose the AI ‘hallucination’” (Zhang, at para 25). In Reddy, the appellant’s counsel was assured by the hired contractor that an LLM had not been used, though they later concluded this may not have been true (Reddy, at para 75). The appellant’s counsel sought permission to file an amended factum and offered several explanations for failing to verify the citations in the document, including time constraints due to the contractor’s late delivery of the document, illness, being very busy, and the holiday season.
The Court of Appeal used the occasion to comment more broadly on the use of AI in court submissions. It noted that the Law Society of Alberta’s Code of Conduct requires lawyers to perform legal services to the standard of a competent lawyer (Reddy at para 80; see also Law Society of Alberta Code of Conduct Rule 3.1-2). The commentary to that rule makes clear that competence extends to technology: lawyers “should develop an understanding of, and ability to use, technology relevant to the nature and area of the lawyer’s practice and responsibilities. A lawyer should understand the benefits and risks associated with relevant technology” (Code of Conduct Rule 3.1-2 commentary [5]). The Court also noted that the Law Society of Alberta has provided guidance to lawyers by publishing “The Generative AI Playbook,” which stresses that lawyers using large language models must understand the potential benefits and risks. The Playbook states:
Even though these products can assist in retrieving relevant case law, statutes and legal articles from vast databases, there are notorious examples of how they have filled gaps in data by making up names, dates, historical events and even legal cases. If pressed, they may even write the cases themselves.
Recognizing that this happens, any lawyer using Gen AI for substantive legal work must proceed with caution and ensure that they independently verify all information generated by the platform. Lawyers should never rely on Gen AI to judge its own accuracy. (The Generative AI Playbook, Key Risks of Using Gen AI, Hallucinations and Legal Research)
This should by now be readily apparent. But as this case illustrates, it can no longer be assumed that legal materials are free of fabricated authorities, and every citation must be checked manually where third parties are relied on for drafting. As the Court wrote, “the time needed to verify and cross-reference cited case authorities generated by a large language model must be planned for as part of a lawyer’s practice management responsibilities” (Reddy, at para 83).
The Court also highlighted its own October 2023 Notice to the Public and Legal Profession: Ensuring the Integrity of Court Submissions When Using Large Language Models.
The Notice highlights the paramount importance of maintaining the integrity and credibility of court submissions and encourages practitioners and litigants to take care when citing legal authorities or analysis derived from LLMs in their filings. Parties should rely only on authoritative sources and ensure there is always a “human in the loop”; that is, all AI-generated submissions should be verified through meaningful human review. This verification can be accomplished by checking references against trusted legal databases and confirming that the citations and their substance can withstand examination.
Verification is part of a lawyer’s professional responsibilities: when used without safeguards, “large language models frequently introduce confusion and delay into proceedings, and worse, constitute an abuse of process that may potentially bring the administration of justice into disrepute” (Reddy at para 80). Similarly, the Court in Ko held that “[i]rrespective of issues concerning artificial intelligence, counsel who misrepresent the law, submit fake case precedents, or who utterly misrepresent the holdings of cases cited as precedents, violate their duties to the court” (Ko at para 14). And, in Zhang: “Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court. Unchecked, it can lead to a miscarriage of justice” (Zhang at para 29). As a result, the Court in Reddy held that “the lawyer whose name appears on the filed document bears ultimate responsibility for the material’s form and contents, as well as ensuring compliance with the October 2023 Notice” (Reddy at para 83). This is the case even if a third party was engaged for drafting. Notably, the Notice’s requirements extend to self-represented litigants as well.
The Court clearly signalled to courts below that they should treat this issue seriously, holding that “counsel and self-represented litigants should not expect leniency where they have failed to adhere to clear and unambiguous requirements [of the October 2023 Notice]” (Reddy, at para 84). Meaningful sanctions, which may include but are not limited to “striking submissions or imposing some form of cost award”, may be required to maintain the “integrity and credibility of court processes” (Reddy, at para 84). They may even include “initiating contempt proceedings or a referral to the Law Society of Alberta” (Reddy, at para 84). In the present case, the respondent argued that an enhanced cost award was warranted due to the abuse of process and the costs incurred searching for the non-existent cases. The Court invited further submissions on the issue and will consider whether to impose a cost award to be paid by the appellant’s lead counsel (Reddy, at para 87). To date, Canadian courts have been relatively lenient when dealing with these situations. As the language of the ABCA suggests, though, this may not remain the case for much longer.
None of this is to suggest that the use of AI is necessarily problematic. There are many powerful tools available, and surely many more on the way, that may help legal professionals work more efficiently and creatively, potentially reducing costs and increasing access to justice in some areas. The potential benefits the Law Society of Alberta outlines include client relationship management, trial preparation, document generation, summarization of legal documents and precedents, and practice management. There are also, however, many risks. With these risks and benefits in mind, law schools need to find effective ways to train students in the responsible use of AI, and law firms need to provide training and support to practitioners. The pressures identified in Reddy, such as time constraints, illness, and unreliable contractors, will always be present in legal practice. So too will the temptation to cut corners. As Reddy suggests, courts may impose increasingly severe consequences where such shortcuts result in court submissions containing fake sources and erroneous citations.
This post may be cited as: Robert Hamilton, “The Alberta Court of Appeal Weighs in on the use of AI in Court Submissions” (15 October 2025), online: ABlawg, http://ablawg.ca/wp-content/uploads/2025/10/Blog_RH_Reddy.pdf
To subscribe to ABlawg by email or RSS feed, please go to http://ablawg.ca
Follow us on Twitter @ABlawg