OpenAI faces a criminal probe after prosecutors alleged ChatGPT provided guidance to a mass shooting suspect, raising questions about “AI-assisted” crimes and legal responsibility.
A US mass shooting is now raising unprecedented legal questions about artificial intelligence.
For the first time, prosecutors are examining whether a chatbot’s responses could carry criminal responsibility.
Investigation underway
OpenAI is under criminal investigation in Florida over whether its ChatGPT tool played a role in a deadly shooting at Florida State University.
According to BBC reporting, Attorney General James Uthmeier said a review of the case led prosecutors to open a formal inquiry.
“Our review has revealed that a criminal investigation is necessary,” he said, adding that ChatGPT provided “significant advice” to the suspect.
The 20-year-old accused gunman is currently in custody awaiting trial.
Alleged guidance
Prosecutors claim the chatbot offered detailed responses ahead of the attack.
Uthmeier said ChatGPT advised the suspect on weapons and ammunition, and even suggested when and where on campus he might encounter more people.
“My prosecutors have looked at this, and they told me that if it was a person on the other end of that screen, we would be charging them with murder,” he said.
Under Florida law, anyone who “aids, abets or counsels” a crime can be treated as a participant.
OpenAI pushes back
OpenAI has rejected the allegations.
“ChatGPT is not responsible for this terrible crime,” a spokesperson said, adding the system did not promote or encourage harmful behavior.
The company said the chatbot provided general information already widely available online and confirmed it has cooperated with authorities, including sharing account data linked to the suspect.
A legal first
The case appears to be the first criminal probe into whether an AI system’s outputs could be tied directly to a violent crime.
It follows a separate lawsuit filed earlier this year over another attack in which ChatGPT was also alleged to have played a role.
Growing scrutiny
Regulators have been raising concerns about how AI tools are used.
A group of US state attorneys general previously warned of a rising number of incidents linked to AI systems, calling for stronger safeguards and clearer warnings.
Where responsibility lies
The investigation now centers on a key question: can a company be held accountable for how its AI is used?
While a chatbot is not a legal person, prosecutors are exploring whether its outputs could still meet the threshold for aiding a crime.
The outcome could set an important precedent as AI tools become more deeply embedded in everyday life.
Sources: BBC