
Image: a computer (via Creative Commons)
Content notice: This article contains discussion of mental health, self-harm, and suicide. If you or someone you know is struggling with suicidal thoughts or a crisis, please call or text 988, the Suicide & Crisis Lifeline.
In times of need, I believe it is important to feel validated and understood by others. People may feel that closeness in the human-like conversation a chatbot offers, but the connection is, objectively, artificial. So I pose a question: should chatbots be trusted to give people mental health advice?
No! We treat Artificial Intelligence, better known as AI, as all-knowing: a quick and easy route to the answer to a complex math problem, a one-stop shop for understanding an idea we don't know the first thing about. But how does it work?
AI has become an integral tool in our society. It works by simulating human intelligence through algorithms, data, and computational power, and it can be found everywhere, with notable chatbots like ChatGPT, Google Gemini, and Microsoft Copilot. Large Language Models (LLMs) make these systems adaptable, growing stronger and smarter every second as they consume data at extraordinary rates.
AI is no longer an elusive, magical technology we believe might one day change the world; it is a reality, a tool we see people use and abuse on a daily basis. Its rise is tied to adoption across our institutions, our media, and even our government. Amazon, Netflix, and Spotify, for example, use AI-driven personalization to shape our shopping and entertainment experiences, and AI chatbots have become widely accessible.
ChatGPT is not equipped to provide psychological treatment. While it may offer transient comfort and direct people to resources, it should not be the end-all, be-all. Human connection is essential when discussing mental health.
While ChatGPT is an accessible resource, it is not human. Even granting its ability to use language that offers comfort, validation, and advice, is it really a device that should handle mental health? It gives the illusion that the "friend" you've found within the machine is as good as it gets, that it is the only confidant you have and will ever need, that it is the key to beating isolation. It lets you believe you don't need real-world resources and connection because you have it all at your fingertips, right?
With the help of ChatGPT, Adam Raine took his own life at 16 years old. His parents later found disturbing, heart-wrenching message threads between him and the chatbot, showing a young boy crying out for help as he struggled with depression and more.
After he confided in the chatbot on multiple occasions about wanting to end his life, there were repeated conversations about ways he might do so. Raine allegedly learned to get around the safeguards that were in place, including the chatbot's prompts to reach out to a helpline, by framing his requests as "writing or world-building" material while talking about self-harm, suicidal ideation, and mental distress. The fact that this system allows conversation about potentially life-ending topics at all is dangerous and astounding.
To understand how conversations with chatbots actually work, we must understand Natural Language Processing (NLP), the specialized study and application of AI concerned with how machines understand, interpret, and generate human language.
NLP, combined with deep learning, has made machine translation more efficient and accurate. But by the nature of generative AI, conversation flows only when the model is prompted and given specific instructions. It is designed to transform the information it sorts, trained to produce as much output as possible, and equipped to serve information built on facts rather than feelings.
There is no sentient analysis, no consciousness; these chatbots are not emotionally intelligent. This points back to NLP and AI's lack of contextual understanding, which can mislead users discussing topics that demand genuine human judgment, such as self-harm and violence. These chatbots have no feelings to guide them, only patterns and trends to follow.
A machine that does not understand humans on an emotional level seems unlikely to hold safe conversations about mental health. Yet people turn to ChatGPT, arguably, because they feel they cannot turn to anyone else. Many adolescents and young adults today do not feel safe, heard, or seen by those around them; some have turned instead to the humanlike entity that is AI.
In a working paper from the National Bureau of Economic Research, produced with OpenAI and Harvard economist David Deming, practical guidance (including how-to advice, tutoring or teaching, creative ideation, health, and more) was found to make up 29% of non-work-related messages on ChatGPT.
The U.S. Surgeon General declared in 2023 that the United States was in a loneliness epidemic. Half the country reported feeling isolated, and among 16-to-24-year-olds the figure reached 73%. It is with great concern and devastation that I note that conversations about mental health remain widely stigmatized while individuals continue to struggle in silence.
AI is not as all-knowing as it appears to be, and maybe it should stay that way. In matters of human nature, it is irresponsible to keep allowing a machine, a chatbot, to act as a placeholder for human connection when it lacks genuine emotional experience.
Without responsibility and proper precautions, impressionable children, adolescents, and adults can be exposed to content and information that is arguably inappropriate. To protect ourselves and society, clear definitions and guidelines for what is and is not acceptable must be created.
Technology like this serves as a reminder of our humanity and the fine line between a tool and a weapon.