CIFAR has launched its first two AI safety Solution Networks under the Canadian AI Safety Institute (CAISI) Research Program at CIFAR. The two research teams - Safeguarding Courts from Synthetic AI Content and Mitigating Dialect Bias (the latter co-funded by the IDRC) - will spend the next two years developing and implementing open-source AI solutions to make AI safer and more inclusive for Canadians and the Global South. Each network receives $700,000 to support its groundbreaking research and development.
The Solution Networks are funded through the CAISI Research Program at CIFAR, an independent, multidisciplinary research arm led by CIFAR. The dedicated research program is a core component of the Government of Canada's Canadian AI Safety Institute, launched in November 2024 with a $50 million investment to address the evolving risks of AI to Canadians.
"AI safety is crucial as the technology becomes more deeply embedded in how we live and work. At its core, it's about two things building trust and developing the tools to uphold it," says the Honourable Evan Solomon, Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario. "Trust that AI will be used responsibly, and tools that make it safer, fairer, and more transparent. These new Solution Networks show how Canadian researchers are advancing the science of safety itself turning ideas into real solutions that make AI work for people."
"CIFAR's Solution Networks provide a unique approach to trustworthy AI research and development, bringing together exceptional teams of interdisciplinary researchers - who might not otherwise cross paths - to address issues of global importance, but more importantly, to design, develop and implement solutions," says Elissa Strome, Executive Director, Pan-Canadian AI Strategy at CIFAR. "Core to the work of both of these Solution Networks is exploring ways to mitigate the potential harms of AI to people in Canada and around the world."
Safeguarding Courts from Synthetic AI Content
Solution Network Members
- Ebrahim Bagheri, Solution Network Co-director (University of Toronto)
- Maura R. Grossman, Solution Network Co-director (University of Waterloo, Osgoode Hall Law School (York University), Vector Institute)
- Karen Eltis, Solution Network Member (University of Ottawa)
- Jacquelyn Burkell, Solution Network Member (Western University)
- Vered Shwartz, Solution Network Member (University of British Columbia, Canada CIFAR AI Chair, Vector Institute)
- Yuntian Deng, Solution Network Member (University of Waterloo)
Co-directed by Ebrahim Bagheri and Maura R. Grossman, this Solution Network aims to address the rising prevalence of synthetic AI-generated content in the justice system. This includes fake image or video evidence generated with AI tools, as well as court documents drafted with large language models (LLMs) such as ChatGPT, which may produce hallucinations.
"The issue now is that you can do this at scale and at convenience," Bagheri says. Previously, one would have to spend large amounts of time and money to forge evidence. Now, evidence can be doctored quickly and easily, and even fabricated entirely from scratch.
The stakes are incredibly high, says Grossman. "Somebody can go to jail or not go to jail depending on whether something is a real or fake video."
It's not always financially feasible to bring in an expert who can evaluate the provenance of AI-generated content or evidence. The team proposes to develop a free, open-source framework that anyone within the court system can use to identify potentially problematic content.
"We need a [transparent] tool that knows when it's not sure about its output. One that is user friendly for this very unique group of users including both self-represented litigants and officers in the court system," adds Grossman.
Their solution could have a huge impact on the efficiency and trustworthiness of a justice system that is facing a great amount of change in a short period of time. "Even if our solution isn't perfect, even if it gets 50, 60 or 70 percent of the way to be able to rule out [synthetic content], then we've really come a long way for the court system."
Mitigating Dialect Bias
Solution Network Members
- Laleh Seyyed-Kalantari, Solution Network Co-director (York University, Vector Institute)
- Blessing Ogbuokiri, Solution Network Co-director (Brock University)
- Wenhu Chen, Solution Network Member (University of Waterloo, Canada CIFAR AI Chair, Vector Institute)
- Collins Nnalue Udanor, Solution Network Member (University of Nigeria)
- Thomas-Michael Emeka Chukwumezie, Solution Network Member (University of Nigeria)
- Deborah Damilola Adeyemo, Solution Network Member (University of Ibadan)
The use of LLMs like ChatGPT has exploded in recent years, but for speakers of non-standard English, these tools are not as safe or effective as they are for others. This is the problem Laleh Seyyed-Kalantari and Blessing Ogbuokiri are working to address.
Their Solution Network focuses on Nigerian Pidgin English, a language spoken by over 140 million people, primarily in West Africa. LLMs trained on standard English often misinterpret marginalized dialects like Pidgin as toxic or offensive and penalize the user. This can lead to very real harms like censorship on social media and discrimination in service-delivery systems.
The team will work to create the first ever bias and safety benchmarks for Pidgin English as part of an open-source audit and mitigation toolkit. These resources will be available for developers and policymakers to use to ensure AI systems are fair and safe for all users. "We are trying to create an AI system where marginalized voices can feel comfortable using these tools because it will accommodate them," adds Ogbuokiri.
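As a rough sketch of what one component of such an audit toolkit could look like (an illustration under assumed inputs, not the team's actual benchmark), the snippet below scores matched pairs of standard-English and Nigerian Pidgin sentences with the same toxicity classifier and reports the average gap. The example pairs, the toy scorer and the `dialect_gap` helper are all hypothetical.

```python
from statistics import mean
from typing import Callable, List, Tuple

def dialect_gap(pairs: List[Tuple[str, str]],
                toxicity_score: Callable[[str], float]) -> float:
    """Average toxicity-score gap between Pidgin and standard-English renderings
    of the same benign message, under any supplied classifier.

    A large positive gap suggests the classifier is penalizing the dialect
    itself rather than the content of the message.
    """
    gaps = [toxicity_score(pidgin) - toxicity_score(standard)
            for standard, pidgin in pairs]
    return mean(gaps)

# Illustrative matched pairs (standard English, Nigerian Pidgin) with the same meaning.
example_pairs = [
    ("How are you doing today?", "How you dey today?"),
    ("I am coming back soon.", "I dey come back soon."),
]

if __name__ == "__main__":
    # Toy scorer purely for demonstration: treats unfamiliar words as "toxic",
    # mimicking a classifier trained only on standard English.
    familiar = {"how", "are", "you", "doing", "today",
                "i", "am", "coming", "come", "back", "soon"}
    toy_scorer = lambda text: mean(
        0.0 if word.strip("?.").lower() in familiar else 1.0
        for word in text.split()
    )
    print(f"Average dialect gap under the toy scorer: "
          f"{dialect_gap(example_pairs, toy_scorer):.2f}")
```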
The team will work with a citizen network in Nigeria, who will help to evaluate the data sets and LLMs used in the project. "I think what makes our solution unique is that it is locally rooted and culturally representative of citizens of African countries," explains Seyyed-Kalantari.
The team also has a policymaking objective, adds Seyyed-Kalantari. "We want to ensure that the research that we are developing [ ] brings actual positive changes for people who are using these LLMs in Africa."
Ogbuokiri notes the impact this project could have beyond West Africa for immigrant and Indigenous communities in Canada who also use non-standard English varieties. "This will serve as a vital public resource for researchers, developers and policymakers," he states. "This project will contribute to locally-grounded and culturally-relevant AI systems that reflect the realities of the Global South."
About the CAISI Research Program at CIFAR
The CAISI Research Program at CIFAR is a component of the Canadian AI Safety Institute, launched by Innovation, Science and Economic Development Canada. The research program is the scientific engine of a broad national effort to promote the safe and responsible development and deployment of AI, independently leading Canadian, multidisciplinary research to find solutions to complex AI safety challenges and to develop practical tools for responsible AI, so that AI is safe for all Canadians.