School shootings are devastating events, and communities often look for ways to prevent them.
In Canada, one of the deadliest recent attacks has raised questions about the role of technology and artificial intelligence in spotting dangerous behavior before it happens.
Account suspended
Canada’s artificial intelligence minister has summoned representatives from OpenAI after the company failed to alert police about a user whose account was suspended over violent content, reports the Guardian. The user, Jesse Van Rootselaar, went on to commit one of the country’s worst school shootings.
Van Rootselaar, 18, killed eight people in Tumbler Ridge on February 10. Five of the victims were students aged 12 to 13, and one was a 39-year-old teaching assistant. Before attacking the school, she also killed her mother and half-brother at home.
In June 2025, Van Rootselaar had described violent scenarios involving guns to ChatGPT. The system flagged the activity, but OpenAI decided the account did not show “credible or imminent planning.” The account was banned, but Canadian authorities were not notified.
Minister Evan Solomon said he was “deeply disturbed” by the reports. He has arranged a meeting in Ottawa with OpenAI’s top safety staff to review how the company decides to escalate cases to law enforcement. “We will have a sit-down to understand their safety protocols and thresholds for alerting police,” Solomon said.
Practiced in Roblox
OpenAI told the Wall Street Journal that after the shooting, employees contacted the Royal Canadian Mounted Police (RCMP) to share information about Van Rootselaar. Before the attack, she had also used the game Roblox to create a virtual mall stocked with weapons where players could shoot each other.
British Columbia’s government confirmed that OpenAI met with officials the day after the shooting. But the company did not reveal that it had suspended Van Rootselaar’s account months earlier for violent content. It was only two days after the shooting that OpenAI reached out to the province for help contacting the RCMP.
Premier David Eby called the revelations “profoundly disturbing.” He said the families’ pain was “unimaginable” and added that learning the company had held relevant information before the attack was deeply troubling for everyone in British Columbia.
The Canadian federal government is now considering how AI chatbots should be regulated, especially regarding minors, and what responsibilities companies have to alert authorities when users show violent behavior.
Sources: The Guardian