
This month, I spoke with Dr. Brent Kious, who is an Associate Professor of Psychiatry at the University of Utah, in the Huntsman Mental Health Institute. In addition to practicing as a psychiatrist, Dr. Kious is a bioethicist and philosopher. His work explores ethical and conceptual issues in psychiatry and mental health care, including the ethical use of AI.
[BM] What do you think constitutes high-quality care in the psychiatry and mental health care spaces?
[BK] I think that’s actually a remarkably hard question, because there are several overlapping concepts. There’s efficacy, there’s quality, there’s meaningfulness of treatment… and then there’s this whole idea about recovery. And each of those gives you a different lens on what good care would be.
Most of my clinical work is adult inpatient psychiatry for people with a variety of diagnoses, including depression, anxiety, psychotic disorders, and mania. I provide a fair amount of psychotherapy to my patients, at least over the short term, so I’m really invested in the idea of what quality psychotherapy entails. In my clinical practice, what I try to provide is good care in the sense of evidence-based treatment that tends, statistically, to promote reduction in symptoms and, ideally, functional improvement—where someone’s ability to do important human activities has been limited by their mental illness, and we want to restore some of that ability—all while trying to minimize harm, side effects, and costs. But that is not the same definition you would get if you took an approach where good care is whatever serves the patient’s goals, and maybe those goals don’t always line up with functional improvement or even symptom reduction.
[BM] I appreciate that very thoughtful answer to a very complicated question. In a paper published in JAMA Pediatrics in 2023, you identify three ethical shortcomings of the use of “conversational AI” in mental health care. Do you still think these are the major ethical issues related to therapy bots?
[BK] I do, with a few minor caveats. The big question about AI therapy is simultaneously a clinical and an ethical one: Does it work? Does it accomplish what it is actually supposed to do? And that is part of the accountability question, because at the moment we lack any system for verifying that this is the case for most of what is currently on offer.
You know, when we wrote this paper back in 2023, it was only the very tip of the iceberg with respect to people starting to use large language models for quasi-therapeutic purposes, and now it’s happening a lot. But what is it that [users] are getting? Is it treatment? Is it therapy? Is it emotional support? Is it some sort of companionship? I don’t know. Those definitions need to be worked out by regulatory bodies. This is one area where bioethics and mental health professions are not able to keep pace with what’s happening, because the tech moves so fast.
[BM] The third ethical issue you talk about in your JAMA Pediatrics paper concerns children as a vulnerable group. I’m wondering whether you think there’s something fundamentally different about children seeking and receiving mental health support online?
[BK] I think it is importantly different for kids, largely because of children’s different moral and legal status. A lot of the current therapy apps enable kids to struggle with symptoms of mental illness alone, without their parents’ knowledge. And there is very little guarantee that, if things are going sideways and the teen or child is not getting better, anybody who could do something meaningful about it will be able to intervene.
I do some research with the crisis text messaging system that we have here in Utah, SafeUT. SafeUT is, in some ways, intended to ensure that kids can get crisis support without having to go through their parents. Because—although most of the time, parents are loving and supportive and want to be there—they’re not always. Access needs to be balanced with keeping parents in the loop.
[BM] What values do you think are promoted or eroded by a shift away from seeking care in a physical clinic towards utilizing digital therapeutics for mental health care?
[BK] I think that’s a really important question, and I’ll try to give a pretty philosophical answer. I think that the real threat to humanity overall by living our lives online, including the use of AI therapists, is effectively a loss of authenticity in our lives. It’s possible—although it has not been true so far—that living online will help us feel happy and satisfied.
If you think about AI companions, right, what you can get from character.ai, Replica, or one of those other services, it will give you the facsimile of a great relationship with somebody who appears to care about you. Many people will be sold on that illusion. But it’s still an illusion.
[BM] Yeah, it’s interesting. The more connected we are, the more isolated we can feel.
[BK] Because it’s a pseudo-connection, right?
[BM] Do you have any other final thoughts on digital therapeutics or therapy chatbots that you want to share?
[BK] One thought is around informed consent in this domain, which is complicated for one important reason that has nothing to do with the uncertain evidence base for these interventions. I think the bigger problem is that even though people are supposed to know that what they’re interacting with is AI, and not a real human being, these apps are designed to give you the illusion that you’re interacting with a real person. People are being asked to provide informed consent to a process that is basically intended to trick them. We should be worried about that.
The other thing that is really important is that we need a more systematic approach to regulation at all of those levels of intervention that I talked about earlier. I mean, I think I’m just going to put out a big, undefended claim here… We are all unwittingly participating in this massive social experiment by interacting with things like ChatGPT, where that experiment is driven entirely by profit motives, with very little attention paid to how it’s going to affect the course of human life or society. And we should all take a step back and say, “Maybe not.” Maybe let’s put the brakes on this. I think the government needs to step in and help us do that.
[BM] The first of those worries I find particularly troubling in children… It’s part of what makes me feel so uncomfortable when I think about kids interacting with these bots.
[BK] Yeah. Kids who could easily have imaginary friends that seem pretty real to them.
