Eight months before the Tumbler Ridge mass shooting, OpenAI knew something was wrong. The company's automated review system had flagged Jesse Van Rootselaar's ChatGPT account for interactions involving scenarios of gun violence. Roughly a dozen employees knew of the flags, and some advocated contacting police. Instead, OpenAI banned the account but did not refer the case to law enforcement, saying it did not meet the "threshold required" at the time.
On Feb. 10, Van Rootselaar killed eight people, including her mother, her 11-year-old half-brother and six others at Tumbler Ridge Secondary School, before dying of a self-inflicted wound.
This case is not simply about one company's misjudgment. It exposes the absence of any Canadian legal framework for assigning responsibility when an AI company possesses information that could prevent violence.
As a researcher in health ethics and AI governance at Simon Fraser University, I study how algorithmic systems reshape decision-making in high-stakes settings. The Tumbler Ridge tragedy sits squarely at this intersection: a private corporation made a clinical-style risk assessment it was never equipped to make, in a legal environment that gave it no guidance.