Should pediatricians recommend therapy chatbots to patients?


Sara Kellner; Bryanna Moore, PhD | May 7, 2025

Key takeaways:

  • Developers have created AI-based apps to treat adult and pediatric mental illness.
  • Much more research is needed on use, outcomes and privacy.

The use of AI in medicine is expanding to aid providers and patients with a variety of tasks, and one area that is growing rapidly is applications for treating mental illness.

“I feel like I learn about new platforms or new apps every week,” Bryanna Moore, PhD, a clinical ethicist and assistant professor in the department of health humanities and bioethics at the University of Rochester in New York, told Healio.

Most of these apps use large language models to respond to users in a human-like way, and many have been designed to utilize cognitive behavioral therapy-based tools, Moore said. Although some research has assessed the benefits and risks of using chatbots for mental health treatment, few studies have focused on pediatric populations.

“They have been developed in the adult setting, and are starting to be picked up and expanded into the pediatric realm,” but there is still so much to learn about them, Moore said.

We talked to Moore about how therapy chatbots work and what pediatricians should know before recommending them to patients.

Healio: How do therapy chatbots work? What are they designed to do?

Moore: They range from really simplistic bots — where each conversation is new, and it does not remember you over time — to more sophisticated apps that have a memory of you as a person and adapt their approach based on what they learn from you in particular.

What most of them have in common is that they rely on large language models — an algorithm that scoops up natural language and uses that to make predictions about a string of text and respond in a way that emulates a human therapist. You open the app, you share how you are feeling and it writes back in a way that either validates those feelings, helps you reframe a problem or uses other tools from cognitive behavioral therapy to help you move through whatever it is you are experiencing.

Healio: Are there special considerations that need to be made when designing therapy bots for kids?

Moore: I work in both the pediatric and adult settings as a clinical ethicist, and working with kids is just different. They are a vulnerable population who are typically dependent on adults and/or society for providing basic goods that they need to survive and flourish. Children typically do not make decisions about their health and health care in the same way that a fully competent, capacitated adult does.

A really important consideration when thinking about, developing and using these apps with children is that their minds are still developing. They are still learning about themselves, their identity and coping. Childhood is such a broad umbrella, and kids are at all different stages of cognitive and social development. Having apps that are sensitive to that and responsive to different social and developmental needs and speeds is really important.

Healio: Are there situations where these apps could be helpful?

Moore: These technologies can be most helpful when there is a concrete skill that kids might need to practice, including social skills, coping skills or mindfulness. Most of the anecdotal evidence says they are most helpful when a person is having increased anxiety. They jump on and follow a guided exercise that helps them feel validated or has them practice their breathing. There is a sense of anonymity that can also be helpful, particularly for adolescents or even younger children who are struggling with something but are not ready to talk to someone about it yet.

Healio: What are the downsides of using these apps?

Moore: They should not be a replacement for other forms of mental health support. This is the situation I worry about. There have been some reports, mostly in adult populations, of what happens when these bots go wrong — when there are hallucinations in the AI, or someone develops a strong overreliance on the technology that negatively impacts other aspects of their life.

If a child is experiencing an acute mental health crisis, it is absolutely not the time to be relying on apps, many of which do not have any safety features built into them. Most app developers and platforms are very clear that they are not intended to be used in those sorts of situations.

Healio: How should pediatric providers talk to patients and families about therapy bots?

Moore: My honest answer to this question is that I do not think we know enough about the evidence base — the benefits and harms — to have good conversations and good informed consent processes for these technologies. There is an emerging evidence base, but the data are still really mixed. One of the big things that we can do is be really honest with patients and families about what we know and what we don’t know.

Importantly, pediatric providers should speak with patients and parents about these apps in a way where they are framed as a supplement — not a replacement — for children seeking help and receiving support from others in their life.

Healio: Are there any concerns about privacy or confidentiality?

Moore: AI is the Wild West at the moment. We do not have regulations in place to ensure privacy, to clarify who is responsible for the data or how they will be stored. Some platforms and developers do a much better job of being clear about how they are trying to mitigate that risk. But for others, there is not a lot of information available. So yes, it absolutely is a concern.

There is some discussion in the existing literature of digital phenotyping and how the sensitive information that people might share while using these apps [will be used]. What if that is going to be sold or used for surveillance or other purposes, outside of the stated goals of the therapy bot? That is a huge ethical concern — one that we could potentially mitigate. But we just need more information, more transparency and clearer regulations around it.

Healio: Should pediatricians recommend therapy bots?

Moore: My honest answer to that is also that I do not know. Apps like Alongside and Troodi are great examples. They have built-in safety features and are developed in a way that is very sensitive to the unique needs and interests of children and adolescents. I think there are some apps that pediatricians might consider using and recommending to children and families, but it needs to be on a case-by-case basis. It should not be, “Here is your app, off you go,” but more, “Let’s try this, see if it helps you with symptom management and other therapeutic goals, then come back together and discuss.”

There is absolutely a place to potentially recommend them, but we still need so much more research. We need to talk to children, adolescents and parents about what pressures they feel, how they think about using the apps and what would be most beneficial.

Healio: Anything else?

Moore: It is really important for us to talk about health disparities in this space. I think we need to develop [these apps] and recommend and utilize them with a real sensitivity to existing health disparities when it comes to mental health and their potential impact on those disparities. Maybe these apps could close the gap, but they could also potentially exacerbate it.

One of the worst things we could do is push people toward using the apps because they are a relatively low-wait-time, low-cost option without having a much stronger evidence base for how that will impact individuals, as well as the communities most likely to be pushed toward or away from using the apps.

For more information:

Bryanna Moore, PhD, can be reached at bryanna_moore@urmc.rochester.edu or on Bluesky at @bryannamore.bsky.social.

