Can You Use ChatGPT or AI for Therapy Notes? What’s Ethical and What’s Not
Is it safe or legal to use ChatGPT for clinical notes? This guide breaks down AI therapy notes ethics, HIPAA compliance, and the key differences between public AI and purpose-built tools like ClinikEHR.
By Dr. Jethro Magaji
14 min read
Quick Answer: Can You Use ChatGPT for Therapy Notes?
No. Using ChatGPT or other public AI tools for therapy notes containing Protected Health Information (PHI) is illegal and unethical.
Here's why in plain terms:
❌ HIPAA Violation: Consumer versions of ChatGPT won't sign a Business Associate Agreement (BAA), which is legally required
❌ Data Training Risk: Your client data may be used to train AI models
❌ No PHI Protection: No safeguards to protect sensitive health information
❌ Severe Penalties: Fines up to $1.5 million per violation category per year, loss of licensure, lawsuits
❌ Ethical Breach: Violates confidentiality obligations to clients
The Legal Alternative: Use HIPAA-compliant, purpose-built clinical AI like ClinikEHR's Clinical Note AI that provides BAAs, encrypts data, and never uses it for training.
Use AI Safely and Legally
Try ClinikEHR's HIPAA-compliant Clinical Note AI. Generate professional SOAP, DAP, and BIRP notes without legal risk. Free to try, no signup required.
Try Free AI Note Generator
The Complete Guide: Why This Matters for Your Practice
It’s the question every tech-savvy therapist in the United States, UK, Canada, Australia, and Africa is asking: “Can I just use ChatGPT to write my therapy notes?” The temptation is understandable. Public AI tools are powerful, accessible, and seem like a quick fix for the endless burden of documentation. But the answer is far more complex than a simple yes or no, and getting it wrong can have serious consequences for your practice and your clients.
Using the wrong AI tool can expose you to significant legal and ethical risks. Using the right one, however, can be transformative. This guide clears up common misconceptions, explains what HIPAA actually requires, and shows you how to use AI for documentation ethically and safely, with purpose-built solutions like ClinikEHR’s Clinical Note AI.
Why This Question Keeps Coming Up: The Allure of Easy Documentation
The rise of generative AI has been meteoric. Tools like ChatGPT, Gemini, and Claude are now part of the public consciousness. For clinicians drowning in paperwork, the appeal is obvious: a technology that can summarize text and generate human-like prose seems like the perfect antidote to documentation fatigue. This curiosity, however, often outpaces a clear understanding of the risks involved. The convenience of a public AI tool can easily overshadow the fundamental requirements of handling Protected Health Information (PHI).
What HIPAA Actually Says About AI Tools
HIPAA doesn’t mention “ChatGPT” by name, but its principles are timeless and apply to any technology that handles PHI. The two most important concepts for a therapist to understand are:
- Business Associate Agreements (BAA): Under HIPAA, any third-party vendor that creates, receives, maintains, or transmits PHI on your behalf is a “Business Associate.” You are legally required to have a signed BAA with them. This agreement contractually obligates the vendor to protect PHI according to HIPAA standards. Public AI tools like the free version of ChatGPT do not and will not sign a BAA. Using them with PHI is a direct violation.
- Minimum Necessary Standard: This principle dictates that you should only use, disclose, or request the minimum amount of PHI necessary to accomplish a specific task. Pasting an entire session transcript into a public tool goes far beyond this standard.
Using a non-compliant tool for any task involving PHI is a direct violation of HIPAA in the US, and similar principles apply under GDPR (UK/EU), PIPEDA (Canada), and other data protection laws in Australia and Africa. The penalties can include severe fines, licensure issues, and irreparable damage to your professional reputation.
ChatGPT vs. Purpose-Built Clinical AI: A Critical Comparison
The difference between a public AI model and a HIPAA-compliant clinical AI tool is not the underlying technology—it's the entire security and privacy infrastructure built around it.
| Feature | ChatGPT (Free Public Version) | ClinikEHR’s Clinical Note AI |
|---|---|---|
| HIPAA Compliance | No. Does not sign a BAA. | Yes. A BAA is provided to all users. |
| Data Privacy | Conversations may be used for model training. | Your data is private and is never used for training. |
| PHI Protection | No specific safeguards to identify or protect PHI. | Platform is designed to secure PHI with end-to-end encryption. |
| Clinical Accuracy | Not trained on clinical language or note formats. | Fine-tuned on clinical terminology and SOAP/DAP/BIRP formats. |
| Workflow Integration | Requires manual copy-pasting, creating risk at each step. | Fully integrated into your EHR workflow, ensuring a secure chain of custody. |
The Bottom Line: Pasting PHI into ChatGPT is the digital equivalent of discussing a client’s case in a crowded coffee shop. It’s a breach of confidentiality, plain and simple.
Ethical Use of AI in Clinical Documentation: A Framework
Beyond the law, our ethical codes demand a thoughtful approach to technology. The core principles of confidentiality, informed consent, and clinician accountability are non-negotiable.
- Confidentiality: Your primary ethical duty is to protect your client’s privacy. This means using only secure, HIPAA-compliant tools for any task involving PHI. There is no ethical gray area here.
- Informed Consent: While you may not need consent for every internal software tool you use, transparency is key. It's good practice to inform clients in your intake paperwork that you use secure, HIPAA-compliant software systems to manage their records, which may include AI-powered tools that assist with administrative tasks.
- Clinician Accountability: An AI is a tool, not a colleague. You, the clinician, are 100% responsible for the accuracy and integrity of the final clinical note. The AI generates a draft; you provide the expert review, clinical nuance, and final signature. Never trust an AI-generated note blindly.
Examples of Ethical vs. Risky Use Cases
Risky & Unethical Use Case: A therapist in the UK records a session on their phone, uses a free online service to get a transcript, and pastes the entire transcript into ChatGPT with the prompt, “Turn this into a SOAP note.”
- Why it’s wrong: PHI was shared with two non-compliant vendors. No BAA is in place, violating both HIPAA (if serving US clients) and GDPR principles. The data may be stored indefinitely and used for training.
Safe & Ethical Use Case: A therapist in Australia uses the integrated ClinikEHR mobile app. After a session, they dictate a 60-second summary of their thoughts into the app.
- Why it’s right: The audio is processed within ClinikEHR’s secure, compliant environment. The data is encrypted, and a BAA (or equivalent data processing agreement) is in place. The AI generates a structured SOAP note draft, which the therapist then reviews, edits for nuance, and signs—all within the same secure platform.
Product Insight: ClinikEHR’s Commitment to Ethical AI
At ClinikEHR, we believe AI should empower clinicians, not endanger them. Our Clinical Note AI was built with a security-first mindset:
- HIPAA and Global Compliance: We provide a BAA and adhere to the highest standards of data protection, including GDPR, PIPEDA, and more.
- Zero-Knowledge Architecture: Your data is encrypted so that even we cannot access it.
- No Data for Training: We will never use your or your clients’ data to train our AI models.
- Integrated and Secure: By keeping the entire workflow within one platform, we eliminate the risks associated with copy-pasting between different applications.
Conclusion: AI Isn’t the Problem — Using It Wrong Is
Artificial intelligence is not inherently risky for healthcare; in fact, it holds immense promise to reduce burnout and improve care. The risk comes from applying consumer-grade tools to professional, high-stakes tasks. Using ChatGPT for therapy notes is a dangerous shortcut that compromises client confidentiality and violates the law.
The ethical path forward is to embrace purpose-built, HIPAA-compliant AI solutions that are designed to protect PHI while delivering the time-saving benefits you need. By choosing the right tools, you can uphold your ethical duties, protect your practice, and reclaim your time.
Frequently Asked Questions (FAQs)
1. Is it ever okay to use ChatGPT for non-PHI tasks in my practice? Yes. Using ChatGPT for brainstorming blog post ideas, drafting marketing copy, or creating generic client handouts is generally acceptable, as long as no PHI is entered.
2. What are the specific penalties for a HIPAA violation involving AI? Penalties can range from thousands to millions of dollars, depending on the level of negligence. It can also lead to corrective action plans, loss of licensure, and civil lawsuits.
3. Does using a VPN make it safe to use ChatGPT with PHI? No. A VPN encrypts your internet connection but does nothing to change the fact that the service you are sending PHI to (ChatGPT) is not HIPAA-compliant and will not sign a BAA.
4. Can I de-identify notes before pasting them into ChatGPT? While theoretically possible, true de-identification is extremely difficult and prone to error. A client's story, location, or unique circumstances can be identifying. It is not a recommended or reliable workflow.
5. How do I know if an AI tool is truly HIPAA-compliant? Ask them directly: "Will you sign a Business Associate Agreement (BAA)?" If the answer is no, or they don't know what that is, the tool is not compliant and should not be used with PHI.
6. What about ChatGPT Plus or ChatGPT Enterprise? ChatGPT Plus ($20/month) still does not sign BAAs for individual users. ChatGPT Enterprise may offer BAAs for organizations, but it's designed for corporate use, not clinical documentation, and lacks healthcare-specific features.
7. Can I use AI to help with non-clinical tasks like scheduling or billing? Yes, as long as the AI tool is HIPAA-compliant and has a BAA. For tasks involving PHI (like appointment reminders with patient names), you must use compliant tools.
8. What should I do if I've already used ChatGPT with PHI? Stop immediately. Document the breach, notify your compliance officer or supervisor, and follow your organization's breach notification procedures. You may need to notify affected clients and regulatory authorities depending on the extent of the breach.
Related Reading on ClinikEHR
AI & Clinical Documentation:
- AI Revolution in Clinical Notes
- AI Clinical Notes for Therapists
- Top 5 AI Clinical Notes
- Free AI Clinical Notes Generator
- How to Get Started with AI Clinical Notes
- Best AI Note Taking for Psychiatry 2026
- Best AI Tools for Human-Sounding Notes 2026
Practice Management:
- 15 Best Clinical Notes Software 2026
- EHR for Therapist: Complete Guide
- Top 5 EHR for Mental Health 2026
- Reduce Documentation Time Without Cutting Corners 2026
Ready to Use AI the Right Way?
Discover how a HIPAA-compliant AI can transform your documentation workflow. Try ClinikEHR’s Clinical Note AI for free and experience the difference.
Try the Free AI Note Generator
Stay in the loop
Subscribe to our newsletter for the latest updates on healthcare technology, HIPAA compliance, and exclusive content delivered straight to your inbox.