OPENAI
GPT-5 steps in when chats get serious
OpenAI is introducing new safeguards to ChatGPT after recent incidents where the chatbot failed to recognise signs of mental distress.
The updates include routing sensitive conversations to advanced reasoning models and adding parental controls, both of which will roll out within the next month.
The move follows two tragic cases: teenager Adam Raine, whose parents have filed a wrongful death lawsuit, and Stein-Erik Soelberg, who used ChatGPT to fuel harmful delusions before a reported murder-suicide.
OpenAI says a real-time router now directs sensitive chats to reasoning models like GPT-5-thinking, designed to “spend more time thinking” and handle complex conversations more carefully.
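For illustration only, here's a minimal sketch of what routing a chat to a reasoning model could look like. OpenAI hasn't published how its real-time router actually works; the keyword check, model names, and thresholds below are assumptions for the example, not OpenAI's implementation or API.

```python
# Toy sketch: route a message to a "reasoning" model when it looks sensitive.
# Keywords and model names are illustrative assumptions, not OpenAI's real router.

DISTRESS_KEYWORDS = {"hopeless", "self-harm", "can't go on", "no reason to live"}

def route_model(message: str) -> str:
    """Return the model to use: a reasoning model for sensitive chats,
    a faster default model for everything else."""
    lowered = message.lower()
    if any(keyword in lowered for keyword in DISTRESS_KEYWORDS):
        return "gpt-5-thinking"  # reasoning model named in the article (hypothetical routing target)
    return "gpt-5-main"          # hypothetical default model name

if __name__ == "__main__":
    print(route_model("What's a quick pasta recipe?"))   # -> gpt-5-main
    print(route_model("I feel hopeless lately."))        # -> gpt-5-thinking
```

A production router would presumably rely on a trained classifier rather than keyword matching, but the shape of the decision, inspect the conversation, then escalate to a slower, more careful model, is the same.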
The parental controls will let parents:
Link their accounts to their teenager’s
Enable “age-appropriate model behaviour rules” (on by default)
Switch off features like memory and chat history, which experts warn can reinforce harmful thought patterns
Safety mode, activated
Parents will also be notified if ChatGPT detects signs of “acute distress.”
These updates form part of a 120-day safety initiative, with OpenAI working alongside mental health professionals to improve safeguards and support.
This update screams, “We’ve learned our lesson.”