Data released by OpenAI, the maker of ChatGPT, indicate that more than one million people using its generative AI chatbot have shown possible signs of suicidal intent, raising fresh concerns about the intersection of technology, mental health, and user safety.
In a blog post published Monday, the company said that about 0.15 percent of ChatGPT users engage in conversations showing “explicit indicators of potential suicidal planning or intent.”
Given OpenAI’s claim that more than 800 million people use ChatGPT weekly, that figure represents roughly 1.2 million users worldwide.
The company also estimated that 0.07 percent of weekly users, roughly 560,000 people, exhibit potential signs of mental health crises linked to psychosis or mania.
Tragedy sparks safety reforms
The data follow the widely reported case of Adam Raine, a California teenager who died by suicide earlier this year. His parents have filed a lawsuit against OpenAI, alleging that ChatGPT provided him with specific instructions on how to take his life.
The tragic incident triggered widespread scrutiny of how generative AI systems handle sensitive mental health conversations and prompted OpenAI to re-evaluate its safety framework.
Strengthened safeguards and human oversight
In response, OpenAI said it has rolled out enhanced parental controls, expanded crisis hotline access, and automated redirection of high-risk conversations to safer model versions designed for non-harmful, supportive engagement.
The company also introduced on-screen reminders encouraging users to take breaks during long chat sessions – a measure aimed at reducing overreliance and emotional fatigue among users.
OpenAI noted that it is working with over 170 licensed mental health professionals to improve ChatGPT’s ability to detect warning signs and respond appropriately to users showing signs of emotional distress.
“We are continually improving our systems to identify and intervene in conversations that indicate potential self-harm or mental health crises,” OpenAI said, emphasizing its commitment to user safety.
Broader debate on AI and mental health
The revelation has reignited global discussions about the role of AI chatbots in emotional support, particularly as millions of users turn to them for companionship or psychological guidance.
Experts have urged caution, warning that AI models are not substitutes for human therapy, even as they can help identify at-risk individuals and connect them to professional help faster.
OpenAI stressed that its goal is not to diagnose or treat mental illness but to “reduce harm and connect users to real-world support systems” whenever a mental health emergency is detected.

