Some artificial intelligence chatbots show promise at helping people with depression improve their symptoms. Other AI bots may give bad advice that doesn't help or leads to harm. So how do you tell the difference?
WebMD sat down with Vaile Wright, PhD, senior director of health care innovation at the American Psychological Association, to discuss how to approach this technology with caution and what the future may hold for it.
WebMD: Do you hear more and more stories about people turning to chatbots for help with depression and other mental health conditions?
Wright: Yes, absolutely. In fact, it's probably not even anecdotal at this point. There is some research suggesting that the No. 1 use case for generative AI chatbots is addressing emotional well-being needs.
WebMD: There are so many chatbots out there, and some are programmed to keep you entertained and hold your attention for as long as possible. What are the risks of asking a chatbot like that for advice about depression?
Wright: The risk is really that the intended use is not to address your emotional well-being. That might be a byproduct, but it isn't what the coders and the developers intend it to be used for.
They're intending to keep you on the platform as long as possible by being unconditionally validating and overly appealing, to the point where they'll basically just tell you whatever you want to hear. The challenge with that is it has the potential to reinforce unhelpful and maybe even harmful thoughts and behaviors.
WebMD: What are some of the warning signs or clues that a chatbot is giving you incorrect or dangerous advice?
Wright: I think the people who are more likely to seek these out as a means of addressing their emotional well-being may be particularly vulnerable. And so I think it's really challenging for a consumer to know when the advice being doled out becomes harmful, in part because we as individuals lean toward an automation bias – we have a tendency to trust the technology over our own gut sometimes.
And we know with younger people as well, being digital natives, that they are even more likely to trust technology over people. So, you know, my general go-to is like, "Try to trust your gut. If it doesn't sound right, then it probably isn't."
WebMD: Some chatbots are marketed as being designed to support mental health. How do you find one that offers advice that's grounded in research and vetted by mental health professionals?
Wright: There are some. One example is Wysa. But because there are so many apps available, it really is challenging for a consumer to be able to weave through what is potentially good versus what's potentially bad.
But some of the things we recommend are for people to go beyond just the star ratings on an app and really go to the website and see: Who is the leadership developing this app? Are there subject matter experts like psychologists or psychiatrists or social workers who are part of the scientific board or part of the production team? How are they marketing themselves? Do they say that they are built to address mental health?
And then, do they have a research page where they show either direct research that they've done or cite research that others have done to support their approach? Those are some of the things that we encourage people to look at.
WebMD: What's the ideal way to use a reputable mental health chatbot for help with depression?
Wright: I think right now the research still says that these types of technologies work best when they're part of ongoing treatment – when they're used alongside a mental health care provider. That being said, recent research has found that some of these more self-directed apps can be very helpful even when used on their own. Again, it really comes down to the quality of the app itself and the kind of support somebody needs.
Everybody deserves traditional therapy, but not everybody needs it. And not everybody necessarily wants it. So I think we need to be thinking about "What's the right fit for the person in front of me?" And it really could be a well-designed, well-developed app.
WebMD: What's an example of how a chatbot could help someone who's also getting traditional therapy?
Wright: If someone's getting therapy and they're experiencing distress at 2 in the morning, they can't reach their psychologist or therapist. So I think that's one of those instances where getting some support from a well-designed, well-developed app could really be helpful.
WebMD: Do you see an expanding role for reputable mental health chatbots in the future?
Wright: I absolutely anticipate a future where you have mental health chatbots that are rooted in psychological science, have been rigorously tested, and are co-created with subject matter experts for the distinct purpose of treating mental health diagnoses. And because they make medical claims, I would anticipate that they would be regulated by the Food and Drug Administration. I think there is a lot of promise there to expand access to care and reach those who can't get it.