January 13, 2026
Education News Canada

UNIVERSITY OF BRITISH COLUMBIA
AI in the law: How UBC researchers are helping to future-proof justice


As artificial intelligence spreads through courts and law offices, questions about fairness, accuracy and accountability are multiplying. UBC researchers are working to make sure the legal system keeps pace, protecting the public while harnessing AI's benefits. 

The Peter A. Allard School of Law has launched a new initiative that aims to integrate AI safely and equitably into the legal system, while preparing lawyers for the ethical and practical challenges it creates. Funded by a $3.5-million gift from the estate of UBC law alumnus Gordon B. Shrum, the initiative will support faculty in developing new coursework covering AI regulation, liability, copyright, and surveillance and privacy risks. 

"The risks of AI in the legal system are numerous: overreliance on generative tools, fabricated digital evidence, intellectual property infringement and more," said UBC law lecturer Jon Festinger, who is helping to lead the initiative and teaching a new course as part of it. 

"But so too are the opportunities: AI could improve access to justice by providing free, basic legal advice or automating repetitive tasks, cutting time and costs for the public." 

AI regulation and legal accountability 

The new course, which began this winter term, includes community events to share insights with legal professionals, policymakers and the public. 

Festinger added: "We don't want law evolving haphazardly, by pretending this technological change isn't happening or reacting too late. We want to build a forward-facing, inclusive and legally sound set of rules and norms by which we govern ourselves." 

"We want to help ensure the law, as well as the legal community, are technology-informed. There are gaps created by these evolving technologies that we're all going to have to wrestle with as a society: How and what do we regulate? For instance, if someone uploads a harmful deepfake to a popular website, should the website host be criminally responsible?" 

The initiative joins other Allard efforts, including the UBC AI & Criminal Justice Initiative, headed by professor Benjamin Perrin, and another Perrin-led project examining AI use by police in Canada. 

Longer-term, the initiative could expand into specializations, interdisciplinary teaching, hands-on student placements with tech companies, and a new course on AI workflows in legal practice, offered for legal professionals in conjunction with UBC Extended Learning. 

Admissibility of digital evidence and risk of AI fabrication 

Dr. Moira Aikenhead, a lecturer at Allard Law who is part of the initiative, teaches students in her evidence course about challenges courts face when dealing with digital evidence. 

She noted that electronic documents before the court are technically required to be authenticated, but whether and how this is done in practice is currently inconsistent. 

"In this landscape, fabricated evidence could be accepted as authentic, or genuine evidence could be ruled inadmissible based on allegations of fabrication. The legal system should ideally implement efficient, reliable methods to verify the authenticity of digital evidence," she said. 

Deepfake detection tools for courts and legal evidence 

Dr. Vered Shwartz, UBC assistant professor of computer science, is bringing a technical lens to these challenges. As a member of a new AI Safety Network launched by the Canadian Institute for Advanced Research, she is working on tools to detect synthetic content such as deepfakes and text hallucinations. The network is hoping to compile multiple AI detection tools into one offering. 

"We want it to be an iterative approach, so that as fabrication technology improves, so too do our detection methods. And while it won't be 100-per-cent accurate at detecting synthetic evidence, it's better than what currently exists, which is almost nothing. Plus, it could at least act as a deterrent," said Dr. Shwartz. 

Collaboration with legal experts is key to the network's aims. "Generative AI is getting better and more accessible; everyone can create believable images for free. I am worried about the justice system because there's no real way to know what's true and what isn't anymore." 

Dr. Shwartz plans to meet with members of the Allard initiative to share expertise and potentially collaborate in the near future. She added: "There has to be an ongoing conversation between AI researchers and experts using AI in different domains. If we want to tackle these difficult problems like detecting synthetic data, we have to work together." 

For more information

University of British Columbia
2329 West Mall
Vancouver, British Columbia
Canada V6T 1Z4
www.ubc.ca/

