If OpenAI had reported Van Rootselaar to authorities, the lawsuits alleged, it would have set a precedent compelling the company to report all similar threats. Handling that volume of incidents would have required a dedicated law enforcement referral team, and OpenAI would likely have taken a reputational hit for reporting ChatGPT users to police. For these reasons, the lawsuits claim, OpenAI was desperate to hide Van Rootselaar’s logs.
Since whistleblowers exposed OpenAI’s mistake, police have gained access to the shooter’s logs, but the families and their legal team have not, Edelson confirmed. OpenAI, he alleged, is feigning concern for the families while denying them closure.
“If he actually wanted to help the families, one thing he would do is provide information easily instead of making us fight in court,” Edelson said. “The families need to understand exactly what happened and why it happened, and making them live through this pain for months to try to extract it out of them is just cruel.”
To people in Tumbler Ridge, OpenAI appeared to be lying when it claimed that the shooter’s ChatGPT account had been banned and that he then evaded safeguards to open a new account. The lawsuits pointed out that OpenAI’s help center teaches banned users how to skirt those safeguards, and that customer support sends deactivated users an email with the same instructions.
These resources help ensure that no revenue is lost when accounts are deactivated, and evidence shows the shooter followed those instructions, the lawsuits alleged.
If the families gain access to the logs, they expect it will become clearer how much ChatGPT encouraged, sustained, and deepened the shooter’s fixation on gun violence. They have accused OpenAI of aiding and abetting the school shooting by designing ChatGPT to act as a willing co-conspirator.

