Thing is, professional therapy is expensive. There's already a big industry of therapists who work online, through chat or video calls, whose quality isn't as good as a professional's in-person care (I'm struggling to find the right way to distinguish the two). And for professional mental health care, there's a waitlist, or you're told to just do yoga and mindfulness.
So for those people, the LLM is replacing having nothing, not a therapist.
> So for those people, the LLM is replacing having nothing, not a therapist.
Considering how actively harmful it is to use language models as a “therapist”, this is like pointing out that some people who don’t have access to therapy drink heavily. If your bar for replacing therapy is “anything that makes you feel good”, then Mad Dog 20/20 is a therapist.
And we’re in a comment thread about a study that concluded:
>LLMs 1) express stigma toward those with mental health conditions and 2) respond inappropriately to certain common (and critical) conditions in naturalistic therapy settings
So if you overheard somebody say “I don’t do that stuff because it’s addictive and people go crazy on it” you would probably assume that they were talking about a substance. Or at the very least you would not assume that they were talking about seeing a therapist.
I think AI is great at educating people on topics, but I agree: when it comes to actual treatment, AI — especially recent AI — falls all over itself to agree with you.
It doesn't have to be that way, though. We could train AIs that push back, or even coordinate with a human therapist, similar to how self-checkout lines still have an attendant.
"Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people."
"“If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Mr. Torres asked. ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”"
It's mad. Here's a smooth talker with no connection to reality or ethics — so let's get people in a fragile mental state to have intimate conversations with it.
Can’t read the article, so I don’t know whether it was an actual case or a simulation, but if it was an actual case, I think we should really double-check that “no history of mental illness”. Everything listed here is something a sane person would never do in a hundred years.
Per the very paper we are discussing, LLMs when asked to act as therapists reinforce stigmas about mental health, and "respond inappropriately" (e.g. encourage delusional thinking). This is not just lower quality than professional therapy, it is actively harmful, and worse than doing nothing.