The families of victims from a deadly mass shooting in Canada have initiated legal action in the United States against OpenAI, the company behind the ChatGPT AI system, and its CEO Sam Altman. According to reports, seven separate lawsuits have been filed in California courts.
The legal complaints center on allegations of negligence. The families argue that OpenAI breached a duty to monitor and report concerning activity generated by its AI. Specifically, the lawsuits claim the company did not flag, or alert authorities to, the suspected shooter's interactions with ChatGPT prior to the attack. The BBC reports that the suits accuse the company and Altman of "abetting a mass shooting" through this alleged failure. Similarly, Folha de S.Paulo frames the core grievance as OpenAI "not alerting police before the attack."
Both sources identify the incident as one of Canada's deadliest mass shootings, though neither article specifies the date, location, or casualty count of the underlying event. The reporting focuses squarely on the novel legal challenge being mounted against a leading artificial intelligence firm. The cases represent a significant test of the legal responsibilities and liabilities of AI developers regarding how their tools are used.
The lawsuits, filed in California, where OpenAI is headquartered, seek to establish a precedent for corporate accountability in the AI sector. The plaintiffs' argument appears to hinge on showing that the company had, or should have had, knowledge of potentially dangerous content generated by its system and a consequent duty to act. This legal theory puts the spotlight on AI companies' internal safety and monitoring protocols.