A 13-year-old student from DeLand, Florida, was arrested after asking ChatGPT how to “kill my friend during class,” according to local reports. The incident, which took place on September 27, 2025, has reignited debate over AI safety and digital responsibility in schools.
The alarming query was flagged by Gaggle, a system that monitors students’ online activity for potential threats. The alert was automatically sent to the school’s digital safety officer, who immediately contacted local authorities.
Police questioned the student, who claimed he was only “trolling” a classmate after an argument. However, officials emphasized that such statements are treated as threats, regardless of intent.
“Another so-called ‘joke’ that could have led to a school emergency,” said the Volusia County Sheriff’s Office in a statement.
School administrators urged parents to talk to their children about digital awareness and accountability when using AI tools.
“Even if a threat isn’t serious, it creates fear and disrupts the safety of the entire learning community,” the school’s spokesperson noted.
The case has also drawn attention to AI oversight in education, coming as OpenAI introduces new parental control features in ChatGPT intended to limit content access and strengthen privacy protections for minors.
Authorities and educators in Volusia County are now calling for greater parental involvement in monitoring students’ online habits and promoting safe use of digital platforms.

