
Lawsuit blames ChatGPT maker OpenAI for bot helping plan a mass shooting


Drew Pusateri, a spokesperson for OpenAI, denied wrongdoing in ‘this terrible crime’
By Associated Press
May 11, 2026

The widow of a man killed in a mass shooting last year at Florida State University is suing ChatGPT maker OpenAI, blaming the company’s artificial intelligence chatbot for contributing to the tragedy.

Prosecutors say they believe ChatGPT advised Phoenix Ikner on which location and time of day would allow for the most potential victims; what type of gun and ammunition to use; and whether a gun would be useful at short range.

“OpenAI knew this would happen. It’s happened before and it was only a matter of time before it happened again,” Vandana Joshi said in a statement Monday. Her husband Tiru Chabba was one of two people killed, and six more were wounded.

Drew Pusateri, a spokesperson for OpenAI, denied wrongdoing in “this terrible crime.”

“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” Pusateri said in an email Monday to The Associated Press.

The lawsuit was filed Sunday in federal court.

Ikner faces two counts of first-degree murder and several counts of attempted murder in the shooting that terrorized the campus in Tallahassee, Florida’s capital, in April 2025. Prosecutors intend to seek the death penalty. Ikner has pleaded not guilty.

Separately, in April, Florida’s attorney general said there was a rare criminal investigation into ChatGPT over whether the app offered advice to Ikner.

Joshi said in a statement released by her lawyer that OpenAI “put their profits over our safety and it killed my husband. They need to be responsible before another family has to go through this.”

Several other civil lawsuits have sought damages from AI and tech companies over the effects of chatbots and social media on users' mental health.

In March, a jury in Los Angeles found both Meta and YouTube liable for harms to children using their services. In New Mexico, a jury determined that Meta knowingly harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms.
