HEALTH

AI Chatbots: Are They Really Helping With Suicidal Thoughts?

USA, Thu Sep 04 2025

Chatbots are increasingly being used by people dealing with suicidal thoughts, but are they actually helpful? A recent study found that these AI programs give mixed and sometimes worrying answers.

The Study's Findings

The study examined how three major chatbots (ChatGPT, Claude, and Gemini) respond to questions about suicide. All three handled the clearest cases well, supplying facts in answer to low-risk questions and refusing to answer the most dangerous ones. When questions fell somewhere in between, however, their responses were inconsistent.

Key Observations

  • Direct Answers: ChatGPT and Claude were more likely to answer questions about how lethal particular suicide methods are. This is concerning because such specifics could facilitate harmful actions.
  • Avoidance: Gemini, by contrast, was less likely to answer suicide-related questions at all, even low-risk ones.
  • Generic Advice: When the chatbots declined to answer, they typically offered generic advice to seek help, but the quality of that advice varied. ChatGPT, for instance, did not mention the current national hotline, 988, and instead referred users to an outdated number.

The Need for Improvement

These findings show that chatbots need further fine-tuning to give safer, more consistent advice on such serious topics. The study recommends that developers refine these systems with input from mental health experts to ensure their responses are both helpful and accurate.
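To make the kind of safeguard at issue concrete, below is a minimal sketch of a pre-response crisis filter: it screens incoming messages for self-harm language and routes flagged users to the current 988 lifeline rather than to model-generated text. Everything here is an illustrative assumption, not any vendor's actual implementation; the keyword screen and the names CRISIS_KEYWORDS, generate_reply, and safe_respond are hypothetical, and a real system would use a clinician-validated risk classifier instead of keyword matching.

```python
# Minimal sketch of a pre-response crisis filter. All names and the
# keyword-matching approach are hypothetical illustrations, not any
# chatbot vendor's actual safety implementation.

CRISIS_KEYWORDS = ("suicide", "kill myself", "end my life", "self-harm")

# 988 is the current U.S. Suicide & Crisis Lifeline cited in the article.
CRISIS_MESSAGE = (
    "If you are having thoughts of suicide, help is available. "
    "In the U.S., call or text 988 to reach the Suicide & Crisis Lifeline."
)

def generate_reply(user_message: str) -> str:
    # Hypothetical stand-in for the chatbot's normal generation path.
    return f"(model response to: {user_message!r})"

def is_crisis_query(text: str) -> bool:
    # Naive keyword screen; a production system would use a
    # clinician-validated risk classifier instead.
    lowered = text.lower()
    return any(keyword in lowered for keyword in CRISIS_KEYWORDS)

def safe_respond(user_message: str) -> str:
    # Route flagged messages to a vetted, up-to-date crisis resource
    # instead of returning free-form model output.
    if is_crisis_query(user_message):
        return CRISIS_MESSAGE
    return generate_reply(user_message)

print(safe_respond("What is the weather like today?"))
print(safe_respond("I want to end my life"))
```

The point of the sketch is the routing decision: whatever classifier is used, the crisis message should come from a vetted, current source rather than the model's free-form output, which is exactly where the study found the chatbots falling short.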

The Importance of Better Guidelines

With over 100 million users interacting with these chatbots every week, it is important to ensure they provide the right kind of support. The study underscores the need for clearer guidelines and better handling of sensitive topics like suicide.

Questions

  • Could the variation in chatbot responses be part of a larger experiment to collect data on users' mental states without their consent?
  • How can the accuracy and consistency of chatbot responses about suicide be improved so that they provide safe and effective mental health information?
  • What role should regulatory bodies play in overseeing the development and deployment of AI chatbots for mental health purposes?
