AI Therapists – How Chatbots and Mental Health Interact
At the end of the day, we are all human. And humans need support. When life gets overwhelming, as it so often does, we instinctively turn to some form of support – whether it’s phoning a friend or seeking therapy. But for many, the idea of therapy can be incredibly daunting. Negative past experiences, financial burdens, and social stigma can all prevent someone from pursuing much-needed mental health support.
Recently, an alarming number of people have instead turned to artificial intelligence for therapy, as chatbots advertising “therapeutic services” have spread across multiple platforms.
What are “AI therapists”? Why do or don’t they work?
Many platforms like character.ai have started peddling artificial “therapist” chatbots designed to listen to users and their mental health concerns. But this raises a dangerous question: to what extent can AI actually help with mental health? While chatbots are equipped with a mountain of knowledge across a vast array of fields, they have also absorbed a comparable amount of misinformation.
So, when these tools are placed in the role of treating people with very real and vulnerable mental health concerns, they can quickly become a public safety issue, as seen in the many reported cases of psychosis and suicide linked to AI. While AI therapy may seem like a great idea, sidestepping the cost and effort of seeking real therapy, it lacks a number of essential components needed for successful mental health support.
AI therapy vs. human therapy – what’s the difference?
First, AI lacks the human component. A chatbot can offer a mountain of reassurance, but its main goal isn’t to help you overcome your concerns; it’s to keep you engaged. Chatbots are inherently designed to tell you what you most likely want to hear so that you remain a loyal user of the platform. Unlike a human therapist, a chatbot cannot ethically reflect on its own actions or on how those actions affect you, the user – as seen in cases of AI-induced psychosis.
Second, it doesn’t protect your privacy the way a real therapist does. Well-known platforms like ChatGPT have repeatedly drawn controversy over breaches of the confidentiality and privacy of chats between users and the AI. A real therapist, by contrast, is bound by law to protect your privacy.

Finally, it isn’t accurately informed. As previously mentioned, AI models are trained on enormous datasets built from information scraped from all corners of the internet. But the internet is far from immune to misinformation, and this produces chatbots that can give advice that is flat-out untrue and even harmful.
Clinical psychologists, on the other hand, are trained to use evidence-based approaches to treating mental health concerns. They are also accountable in a way no chatbot is: if a chatbot gives harmful advice, it can’t be held liable the way a person would.
So what’s the consensus on AI’s role in therapy?
Right now, AI chatbots are still in the early stages of development, even within purely technical fields. So when it comes to mental health, AI is not an ideal resource, nor is it a recommended one. In fact, many cases of suicide, psychosis, and other crises have been linked to highly dysfunctional personal relationships between vulnerable individuals and AI. If you are struggling with mental health concerns, it’s important to understand these risks and to seek support from a qualified human professional instead.
Author: Ryan Ghodse-Elahi
Reviewed by Dr. Sarah Haller, C. Psych, Clinical Psychologist