Is There Such a Thing as Ethical AI in Therapy? Here’s What Therapists Have to Say

Is AI in therapy helpful or harmful? Experts explore ethical use of AI and how clinicians can safeguard the therapeutic relationship.
By Victoria Hudgeons
Updated July 29, 2025
Key Takeaways
  • AI tools can streamline therapy services by aiding with diagnostics, treatment planning, and reducing administrative burdens, ultimately freeing up clinicians for more client-facing work.
  • While AI offers exciting advancements, prioritizing client safety, informed consent, and authentic therapeutic relationships remains paramount.
  • One of the most pressing concerns around AI use in therapy is data security, as vague policies and past breaches have undermined trust in digital technology.

Today’s therapists work at the intersection of mental health, business, and technology — three fields rapidly reshaped by artificial intelligence (AI). As AI becomes more embedded in our daily lives, therapists will face new questions regarding its use, impact on therapeutic relationships, and appropriate ethical safeguards.

Psychology.org asked a panel of four licensed therapists for their views on ethical AI integration in psychotherapy.

Meet Our Contributors

  • Jessica Gaddy, DSW-C, LCSW: Wellness educator and doctoral candidate dedicated to exploring the intersection of mental health and technology.
  • Kibby McMahon, Ph.D.: Licensed psychologist and co-founder and CEO of the digital mental health company KulaMind.
  • Reesa Morala, LMFT: Founder of Embrace Renewal Therapy & Wellness Collective and host of The Real Family Eats podcast.
  • Shari B. Kaplan, LCSW: Founder and clinical director of Cannectd Wellness and The Can’t Tell Foundation.

Is There Room for Ethical AI in Therapy?

According to a 2024 Pew Research Center survey, 30% of U.S. adults interact with AI at least several days per week. Meanwhile, regulators still debate whether government or tech companies are responsible for AI oversight. This often leaves therapists to decide whether — and how — to integrate AI into their practice.

One of the most promising applications of AI in therapy is in reducing administrative work and associated burnout. Gaddy highlighted “tremendous potential” for AI to advance precision diagnostics and personalized treatment planning, while Morala added that AI tools can offer clients a judgment-free space for thought processing between therapy sessions.

AI use in therapy also raises critical concerns regarding data privacy, overreliance on technology, and the introduction of bias or misinformation into care. Some therapists may also have ethical reservations about AI’s significant energy demands, which raise environmental justice concerns that fall disproportionately on vulnerable areas and populations.

“Right now,” McMahon said, “AI is evolving so fast, it’s hard to keep up with all the possible ways it could put people at risk.”

Despite these risks, our therapist contributors agree there is room for some degree of ethical AI use in therapy practice, as long as professionals take significant care. “I believe ethical AI can exist in therapy, but only when integrated with clear boundaries, human oversight, and a deep understanding of its limitations,” Kaplan summarized.

How do therapists define “ethical AI”?

“Ethical AI refers to the responsible and consensual utilization of services, products, and/or tools that may enhance therapeutic or administrative services.”
– Jessica Gaddy, DSW-C, LCSW

“To me, ethical AI in therapy means using this emerging and rapidly evolving technology in a way that protects people’s dignity, freedom of choice, and emotional safety.”
– Kibby McMahon, Ph.D.

“Ethical AI in therapy is the utilization of artificial intelligence as a supplemental tool, mindful of abiding by laws of confidentiality and a client’s rights to protected health information.”
– Reesa Morala, LMFT

“[Ethical AI] involves the responsible use of AI tools that protect patient confidentiality, are free from bias, and serve to complement, not replace, the therapeutic relationship.”
– Shari B. Kaplan, LCSW

How Does AI Use Impact the Therapeutic Relationship?

A trusting rapport between the mental health professional and the client is critical for successful therapy. Integrating AI into therapy practice can either enhance or detract from this key relationship, depending on how it’s used: as a supplemental tool outside of sessions, as an integrated tool during sessions, or as a replacement for professional therapy.

Using AI as a Supplemental Tool

When providers and clients are in alignment, AI tools can help clients practice skills and coping techniques. Supplemental use of AI outside of sessions may help make the therapeutic relationship more productive.

Morala discussed her personal experience with a former client who excelled at developing insights and problem-solving skills during sessions, but who struggled on their own. The client would sometimes feel “stuck” while using traditional tools like journaling.

“Since this client had been in treatment long enough…and I was confident they had the insight to respect the line of tool vs. replacement,” Morala said, “I felt comfortable introducing AI as a sounding board for externalizing the problem and getting clarity.”

Using AI During Therapy Sessions

AI products targeted at mental health providers can listen in on therapy sessions to produce detailed notes. If a client distrusts AI technology, a therapist’s request to use it could negatively impact trust. However, AI-assisted therapists who spend less time notetaking may build stronger client rapport.

Gaddy shared her personal experience evaluating an AI-based clinical documentation tool that promised to streamline progress notes: “On the surface, it sounded like a dream: less admin, more time with clients. But when I dug deeper into the privacy policy, I noticed vague language around data storage and third-party sharing.”

Gaddy ultimately decided not to use the tool. Her experience highlights the importance of weighing potential administrative relief against the possible erosion of a core element of the therapeutic relationship: client trust.

How should therapists discuss the use of AI with clients?
As mental health professionals, we have a responsibility to clearly explain how AI tools are being used: what they’re for, how they work, and what the potential benefits or risks might be. Informed consent is not just a checkbox; it’s how we protect trust, transparency, and the dignity of the people we serve.

Using AI in Place of Therapy

Large language model (LLM) chatbots are both easily accessible and low-cost, leading some individuals to turn to LLMs for primary mental health support. This can weaken professional therapeutic relationships, as prospective therapy clients become accustomed to relying on non-human help to solve their problems.

Although some users perceive AI as “more compassionate,” Stanford University researchers concluded that LLMs cannot safely replace therapists. Our contributors agreed.

“When LLMs go rogue, they could give users false information or even harmful advice,” McMahon said. She pointed to the National Eating Disorders Association’s AI chatbot, Tessa, which was shut down after it began offering dieting tips to users struggling with eating disorders. “Trained human providers would know to avoid giving advice like that, but an AI chatbot might respond without following clinical judgment.”

Gaddy summarized, “The therapeutic relationship is built on trust, empathy, and presence, key factors no algorithm can replicate.”

What Safeguards Can Therapists Use to Ensure Ethical AI Use?

The decision to use AI can raise ethical and legal concerns. Our therapist contributors recommend thoroughly researching AI tools, understanding AI’s limitations, prioritizing data privacy, remaining transparent, and following the latest professional guidelines.

Researching AI Tools Before Use

Careful research of AI tools means going beyond marketing materials to examine a company’s policies on privacy and data handling.

“In my integrative practice, I’ve explored several tech tools,” Kaplan shared, “particularly those related to patient journaling and self-monitoring. I ultimately decided against adopting one that lacked HIPAA compliance and did not offer clarity on data handling. It reinforced the importance of vetting tools rigorously. As a clinician, I must protect my patients’ emotional safety and their digital footprint.”

Before using any AI product, therapists need to understand how it works, what information the tool gathers and retains, and whether and how this information is stored, shared, or used in AI training.

Gaddy encourages providers to read all terms and conditions and to test AI services independently first, preferably using faux data. “Familiarizing yourself with a product’s features and functions will help you educate your clients and empower them to make a fully informed decision about consent,” she said.

How can therapists decide whether AI is right for their practice?
I think it is paramount that before a therapist introduces AI tools into their practice, whether on the back end or in the therapy room, they ask themselves: Are you using it to support the good work you are already doing, or are you using it in place of the work that actually needs to be done?

Understanding AI’s Limitations

When evaluating AI tools for use in therapy, professionals must have a clear understanding of the technology’s limitations.

“Currently, AI lacks emotional intuition,” Kaplan said. “It cannot hold space for grief, discern nonverbal cues, or co-regulate with a client in distress. Relying too heavily on AI might risk undermining the therapeutic alliance, or worse, lead to misdiagnosis or misinformation.”

“Humans are unique and their problems diverse,” Morala agreed. “AI pulls from data that is already in existence, devoid of the nuances that can come from personal experiences such as socioeconomic status, past trauma, neurodivergence, cultural influences, to name a few.”

Prioritizing Safety and Data Privacy

When we interact with AI, our input is often used to further train and develop AI models. If this training data includes sensitive personal information, the risk is clear. Unfortunately, even services with robust privacy protections are not immune to data breaches.

“I believe the greatest potential risk associated with incorporating AI into therapy is data safety and privacy,” Gaddy said. “While disclosures and consents can outline how user data will be secured, we have witnessed numerous data breaches among previously trusted tech companies.”

Other safety risks of AI use in therapy include harmful hallucinations and biased outputs stemming from poor training data. “Therapists must advocate for diverse, inclusive data sets and seek tools that are regularly audited,” Kaplan insisted. “At a minimum, practitioners should educate themselves on the origins and architecture of any AI tool they consider.”

Ultimately, therapists must remember that AI companies are businesses under pressure to scale quickly, and even companies dedicated to ethical AI integration may prioritize business goals like user engagement and acquisition over care. “We can’t outsource ethical responsibility to the software developer,” McMahon warned.

Are therapists able to ensure that the AI tools they use are unbiased and do not perpetuate discrimination or harm?
No, we can’t ensure it fully, as AI models often have a problem of bias. However, we can ask better questions: Who trained the model? On what data? Was it tested across diverse populations? What happens when the tool is wrong? Bias is baked into many systems because our society is biased, so the burden is on us to interrogate these tools before we integrate them and to prioritize safety and transparency over convenience.

Providing Transparency and Informed Consent

Therapists must provide clients with clear, understandable information about AI integration. “Clients should know when AI is being used, what it’s doing, and how to opt out,” McMahon said.

Therapists should request client consent before using AI in therapy practice. “Providers should consider creating a written informed consent disclosure and keeping it in the patient’s electronic health record to document agreement to the use of AI in the therapeutic environment,” Gaddy suggested. Similarly, if a client opts out of AI use, therapists must be sure to document and respect their choice.

If a therapist decides to move to a different type of AI technology or service, they must update any AI disclosures and consent documents. Clients must be given the opportunity to re-review, ask questions, and opt in or out.

Adapting Industry Standards and Best Practices

Although mental health care is a highly regulated profession, regulations on AI use are lacking. Therapists should research the latest AI legislation in their state, seek guidance from professional organizations, and continue to advocate for the development of ethical AI standards.

The American Psychological Association has published its own ethical guidance and advocated for federal action. Some states have taken legislative steps, like Utah’s H.B. 452 Artificial Intelligence Amendments, signed into law by Governor Cox in March 2025. H.B. 452 establishes user protections, mandates disclosures, and limits how mental health chatbots can use personal information.

However, these guidelines and rules are largely a patchwork, and there is a continued need to adapt and standardize AI best practices across the mental health profession. “Some organizations have started conversations, but we need more robust, unified guidance,” Kaplan said. “It’s time for our codes of ethics to evolve as quickly as technology is.”

What safeguards or regulations do you think should be in place to protect client privacy and confidentiality when using AI in therapy?
AI systems in therapy must comply with HIPAA and equivalent privacy laws. There should be clear policies for data encryption, user control over information, and opt-in consent. Regulatory bodies should begin to create guidance specific to AI in mental health care. Right now, it’s a bit of the Wild West.

Can Therapists Help Lead the Future of Ethical AI?

Our contributors shared a belief that therapists should be essential partners in developing ethical AI tools and the regulations that will govern them.

“I foresee AI completely revolutionizing the mental health and wellness industry over the next few years,” Gaddy said. “By co-designing tools and services, we can boost access, build culturally affirming interventions, and develop treatment modalities that truly enhance, expedite, and improve mental health care for everyone.”

Meet Our Contributor
Jessica Gaddy, DSW-C, LCSW

Jessica Gaddy is a licensed clinical social worker, wellness educator, and doctoral candidate dedicated to exploring the intersection of mental health and technology. She is committed to advancing the field of mental health by leveraging innovative mobile health (mHealth) solutions and artificial intelligence to create culturally affirming tools that expand access to care and reduce barriers to treatment.

Meet Our Contributor
Shari B. Kaplan, LCSW

Shari B. Kaplan, LCSW, is a seasoned psychotherapist, trauma expert, and integrative mental health innovator with over two decades of clinical experience. As the Founder and Clinical Director of Cannectd Wellness and The Can’t Tell Foundation, she has built a unique, multidisciplinary model that bridges traditional psychotherapy with holistic modalities to support mental, emotional, and spiritual well-being. Shari’s approach blends evidence-based practices with functional spirituality, emphasizing the repair, release, and reorganization of internal systems to support long-term healing. A vocal advocate for ethical technology integration in mental health, Shari regularly consults on how AI can be used in a supportive, not substitutive, way in clinical practice. She is deeply committed to patient-centered care, always prioritizing safety, privacy, and the therapeutic alliance.

Meet Our Contributor
Kibby McMahon, Ph.D.

Kibby McMahon, Ph.D., is a licensed clinical psychologist and the co-founder and CEO of KulaMind, a digital mental health company building AI-powered tools to support loved ones of people with mental illness. With over 20 years of experience in psychological research and clinical practice, Dr. McMahon has worked at institutions including Columbia University, Weill Cornell, and Duke University, where she earned her Ph.D. and led research funded by the National Institute of Mental Health. Her expertise lies at the intersection of evidence-based psychotherapy and scalable digital interventions, including CBT, DBT, and AI-enhanced coaching. She also serves as co-host of the podcast A Little Help for Our Friends, which explores the ripple effects of mental illness on families and romantic relationships. As a thought leader in the emerging field of AI in mental health, Dr. McMahon is passionate about building responsible, relationally attuned AI that complements, not replaces, human connection.

Meet Our Contributor
Reesa Morala, LMFT

Reesa Morala is a licensed marriage and family therapist who helps parents heal their relationships and create thriving family systems. As the founder of Embrace Renewal Therapy & Wellness Collective and host of The Real Family Eats podcast, Reesa empowers couples to move past ugly fights, disconnection, and burnout by teaching them to prioritize their partnership and personal wellness. A first-generation Filipino American and mom herself, Reesa blends research-based therapy with real-life understanding, humor, and cultural humility. Her work breaks down the “perfect parent” myth and offers honest, practical support for couples raising kids in today’s overwhelming world. Whether through therapy, workshops, or storytelling, Reesa is passionate about helping parents feel seen, supported, and reconnected not just as caregivers but as partners and individuals. Her mission is clear: strong relationships create strong families, and parents deserve support too.