Can You Use ChatGPT or AI for Therapy Notes? What’s Ethical and What’s Not
Is it safe or legal to use ChatGPT for clinical notes? This guide breaks down AI therapy notes ethics, HIPAA compliance, and the key differences between public AI and purpose-built tools like ClinikEHR.
By Dr. Jethro Magaji
17 min read

Quick Answer: Can You Use ChatGPT for Therapy Notes?
It depends on which version you use. As of April 2026, OpenAI has launched ChatGPT for Clinicians — a dedicated version designed for clinical tasks including documentation. This changes the landscape significantly, but the free consumer version of ChatGPT remains off-limits for PHI.
ChatGPT for Clinicians (NEW — launched 2026):

- ✅ Free for verified US physicians, NPs, PAs, and pharmacists
- ✅ Optional HIPAA support via a Business Associate Agreement (BAA) for eligible accounts
- ✅ Conversations are NOT used to train models
- ✅ Multi-factor authentication and account security
- ✅ Tested by physicians: 99.6% of responses rated safe and accurate
- ✅ Skills for repeatable clinical workflows (referral letters, prior authorizations, patient instructions)
- ✅ Deep research across peer-reviewed medical journals, with citations
Regular ChatGPT (Free/Plus — still risky for PHI):

- ❌ No BAA available for individual consumer accounts
- ❌ Data may be used for model training (unless you opt out)
- ❌ No PHI-specific safeguards
- ❌ Not designed for clinical documentation workflows
- ❌ Severe penalties for HIPAA violations: fines up to $1.5 million per year
Bottom line: If you are a verified US clinician, ChatGPT for Clinicians with a BAA is now a legitimate option for clinical documentation support. For everyone else — or if you want a fully integrated EHR workflow — purpose-built clinical AI like ClinikEHR's Clinical Note AI remains the safest and most practical choice.
The Complete Guide: Why This Matters for Your Practice
It’s the question every tech-savvy therapist in the United States, UK, Canada, Australia, and Africa is asking: “Can I just use ChatGPT to write my therapy notes?” The temptation is understandable. Public AI tools are powerful, accessible, and seem like a quick fix for the endless burden of documentation. But the answer is far more complex than a simple yes or no, and getting it wrong can have serious consequences for your practice and your clients.
Using the wrong AI tool can expose you to significant legal and ethical risks. Using the right one, however, can be transformative. This guide clears up common misconceptions, explains what HIPAA actually requires, and shows you how to use AI for documentation ethically and safely, with purpose-built solutions like ClinikEHR’s Clinical Note AI.
Why This Question Keeps Coming Up: The Allure of Easy Documentation
The rise of generative AI has been meteoric. Tools like ChatGPT, Gemini, and Claude are now part of the public consciousness. For clinicians drowning in paperwork, the appeal is obvious: a technology that can summarize text and generate human-like prose seems like the perfect antidote to documentation fatigue. This curiosity, however, often outpaces a clear understanding of the risks involved. The convenience of a public AI tool can easily overshadow the fundamental requirements of handling Protected Health Information (PHI).
What HIPAA Actually Says About AI Tools
HIPAA doesn’t mention “ChatGPT” by name, but its principles are technology-neutral: they apply to any tool that handles PHI. The two most important concepts for a therapist to understand are:
- Business Associate Agreements (BAA): Under HIPAA, any third-party vendor that creates, receives, maintains, or transmits PHI on your behalf is a “Business Associate.” You are legally required to have a signed BAA with them. This agreement contractually obligates the vendor to protect PHI according to HIPAA standards. Public AI tools like the free version of ChatGPT do not and will not sign a BAA. Using them with PHI is a direct violation.
- Minimum Necessary Standard: This principle dictates that you should only use, disclose, or request the minimum amount of PHI necessary to accomplish a specific task. Pasting an entire session transcript into a public tool goes far beyond this standard.
Using a non-compliant tool for any task involving PHI is a direct violation of HIPAA in the US, and similar principles apply under GDPR (UK/EU), PIPEDA (Canada), and other data protection laws in Australia and Africa. The penalties can include severe fines, licensure issues, and irreparable damage to your professional reputation.
2026 Update: OpenAI Launches ChatGPT for Clinicians
In a significant development, OpenAI announced ChatGPT for Clinicians — a dedicated version of ChatGPT designed specifically for clinical tasks. This changes the conversation around AI and therapy notes considerably.
What ChatGPT for Clinicians offers:
- Free access to frontier AI models for verified US physicians, NPs, PAs, and pharmacists
- Optional HIPAA support via a Business Associate Agreement (BAA) for eligible accounts
- No model training on conversations — your data stays private
- Clinical workflow skills — reusable templates for referral letters, prior authorization, patient instructions, and documentation
- Trusted clinical search with real-time, cited answers from peer-reviewed medical sources
- Deep research across medical journals with source citations
- CME credits from eligible evidence reviews — no separate courses needed
- 99.6% safety rating — physician advisors tested 6,924 conversations and rated 99.6% of responses as safe and accurate
What this means for therapists:
This is a genuine step forward. For the first time, a major consumer AI platform offers HIPAA-compliant infrastructure for clinical use. However, there are important caveats:
- Currently US-only — verified US clinicians only (expansion planned via the Better Evidence Network)
- BAA is optional and for eligible accounts — you must specifically enable HIPAA support; the default consumer ChatGPT still has no BAA
- Not an EHR — ChatGPT for Clinicians generates text but doesn't integrate with your scheduling, billing, patient portal, or clinical workflow
- Still requires clinician review — OpenAI explicitly states it's "designed to support clinicians with information, not replace their judgment or expertise"
- Therapist-specific features are limited — it's built for general clinical use, not specifically for mental health documentation formats like SOAP, DAP, BIRP, or GIRP
Our recommendation: If you're a verified US clinician, ChatGPT for Clinicians with BAA enabled is now a legitimate tool for clinical documentation support. But for a complete, integrated workflow — where AI notes flow directly into your EHR, connect to scheduling, billing, and patient records — purpose-built solutions like ClinikEHR remain more practical for day-to-day practice management.
ChatGPT vs. Purpose-Built Clinical AI: A Critical Comparison (Updated 2026)
The difference between a public AI model and a HIPAA-compliant clinical AI tool is not the underlying technology—it's the entire security and privacy infrastructure built around it.
| Feature | ChatGPT (Free/Plus) | ChatGPT for Clinicians | ClinikEHR Clinical Note AI |
|---|---|---|---|
| HIPAA Compliance | No BAA available | Optional BAA for eligible accounts | Yes, BAA provided to all users |
| Data Privacy | May be used for training | Not used for training | Never used for training |
| PHI Protection | No specific safeguards | MFA, account security | End-to-end encryption |
| Clinical Accuracy | General-purpose AI | Physician-tested (99.6% safe) | Fine-tuned for SOAP/DAP/BIRP |
| Workflow Integration | Manual copy-paste | Standalone chat interface | Fully integrated into EHR |
| Scheduling/Billing | No | No | Yes, built-in |
| Patient Records | No | No | Yes, comprehensive |
| Cost | Free/$20/month | Free for verified US clinicians | Free plan available |
| Availability | Global | US-only (expanding) | Global |
| Mental Health Focus | Generic | General clinical | Mental health-specific templates |
The Bottom Line: The free consumer version of ChatGPT remains off-limits for PHI. ChatGPT for Clinicians with BAA is now a legitimate option for verified US clinicians. For a fully integrated clinical workflow, purpose-built tools like ClinikEHR remain the most practical choice.
Ethical Use of AI in Clinical Documentation: A Framework
Beyond the law, our ethical codes demand a thoughtful approach to technology. The core principles of confidentiality, informed consent, and clinician accountability are non-negotiable.
- Confidentiality: Your primary ethical duty is to protect your client’s privacy. This means using only secure, HIPAA-compliant tools for any task involving PHI. There is no ethical gray area here.
- Informed Consent: While you may not need consent for every piece of internal software you use, transparency is key. It's good practice to inform clients in your intake paperwork that you use secure, HIPAA-compliant software to manage their records, which may include AI-powered tools that assist with administrative tasks.
- Clinician Accountability: An AI is a tool, not a colleague. You, the clinician, are 100% responsible for the accuracy and integrity of the final clinical note. The AI generates a draft; you provide the expert review, clinical nuance, and final signature. Never trust an AI-generated note blindly.
Examples of Ethical vs. Risky Use Cases
Risky & Unethical Use Case: A therapist in the UK records a session on their phone, uses a free online service to get a transcript, and pastes the entire transcript into ChatGPT with the prompt, “Turn this into a SOAP note.”
- Why it’s wrong: PHI was shared with two non-compliant vendors. No BAA is in place, violating both HIPAA (if serving US clients) and GDPR principles. The data may be stored indefinitely and used for training.
Safe & Ethical Use Case: A therapist in Australia uses the integrated ClinikEHR mobile app. After a session, they dictate a 60-second summary of their thoughts into the app.
- Why it’s right: The audio is processed within ClinikEHR’s secure, compliant environment. The data is encrypted, and a BAA (or equivalent data processing agreement) is in place. The AI generates a structured SOAP note draft, which the therapist then reviews, edits for nuance, and signs—all within the same secure platform.
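To make the "AI drafts, clinician reviews and signs" principle concrete, here is a minimal sketch of a review-gated note workflow in Python. The class and method names are hypothetical illustrations of the pattern, not ClinikEHR's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SoapNoteDraft:
    """A hypothetical AI-generated SOAP draft that cannot reach the chart unsigned."""
    subjective: str
    objective: str
    assessment: str
    plan: str
    signed_by: Optional[str] = None
    signed_at: Optional[datetime] = None

    def sign(self, clinician_id: str) -> None:
        # The clinician's review and signature are the gate between
        # an AI draft and an official clinical record.
        self.signed_by = clinician_id
        self.signed_at = datetime.now(timezone.utc)

    def finalize(self) -> dict:
        # Refuse to commit an unreviewed draft to the record.
        if self.signed_by is None:
            raise PermissionError("Draft must be reviewed and signed by a clinician.")
        return {
            "S": self.subjective, "O": self.objective,
            "A": self.assessment, "P": self.plan,
            "signed_by": self.signed_by,
            "signed_at": self.signed_at.isoformat(),
        }

# An AI service returns a draft; nothing is saved until sign() runs.
draft = SoapNoteDraft(
    subjective="Client reports low mood and poor sleep.",
    objective="Flat affect; slowed speech.",
    assessment="Symptoms consistent with a moderate depressive episode.",
    plan="Continue weekly CBT; review sleep hygiene.",
)
draft.sign(clinician_id="therapist-042")
record = draft.finalize()  # succeeds only after clinician sign-off
```

The design point is structural: the software makes it impossible for an unreviewed AI draft to become part of the record, mirroring the clinician accountability principle above.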
Product Insight: ClinikEHR’s Commitment to Ethical AI
At ClinikEHR, we believe AI should empower clinicians, not endanger them. Our Clinical Note AI was built with a security-first mindset:
- HIPAA and Global Compliance: We provide a BAA and adhere to the highest standards of data protection, including GDPR, PIPEDA, and more.
- Zero-Knowledge Architecture: Your data is encrypted in a way that even we cannot access it (the sketch after this list shows the general pattern).
- No Data for Training: We will never use your or your clients’ data to train our AI models.
- Integrated and Secure: By keeping the entire workflow within one platform, we eliminate the risks associated with copy-pasting between different applications.
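For readers curious what "encrypted so that even the vendor cannot read it" means in practice, here is a minimal sketch of the client-side encryption pattern behind zero-knowledge designs, using Python's widely used cryptography library. It is a simplified illustration of the general concept, not a description of ClinikEHR's actual architecture.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The key is generated and held on the clinician's side.
# In a zero-knowledge design, the server never sees this key.
key = Fernet.generate_key()
cipher = Fernet(key)

note = b"Client reported improved sleep; continue current treatment plan."

# Only ciphertext ever leaves the device or sits in vendor storage.
ciphertext = cipher.encrypt(note)

# Anyone holding just the ciphertext (including the vendor) sees opaque bytes;
# decryption requires the key that never left the client.
assert cipher.decrypt(ciphertext) == note
```

Real systems layer key management, per-record keys, and audit logging on top, but the core idea is the same: the party storing the data never holds the key needed to read it.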
Conclusion: The Landscape Has Changed, but Caution Still Matters
The launch of ChatGPT for Clinicians in 2026 marks a genuine shift. For the first time, a major consumer AI platform offers HIPAA-compliant infrastructure with BAA support, no model training on conversations, and physician-tested accuracy (99.6% safe rating across 6,924 test conversations). Verified US clinicians now have a legitimate, free option for AI-assisted documentation.
However, the core principles remain unchanged:

1. The free/Plus version of ChatGPT is still not safe for PHI.
2. ChatGPT for Clinicians is US-only for now.
3. It is not an EHR: it generates text but does not integrate with scheduling, billing, or patient records.
4. Clinician accountability is non-negotiable: AI generates drafts; you provide expert review.
5. For integrated practice management, purpose-built solutions like ClinikEHR that combine AI notes with scheduling, billing, telehealth, and patient records remain the most practical choice.

Frequently Asked Questions (FAQs)
1. Is it ever okay to use ChatGPT for non-PHI tasks in my practice? Yes. Using ChatGPT for brainstorming blog post ideas, drafting marketing copy, or creating generic client handouts is generally acceptable, as long as no PHI is entered.
2. What are the specific penalties for a HIPAA violation involving AI? Penalties can range from thousands to millions of dollars, depending on the level of negligence. It can also lead to corrective action plans, loss of licensure, and civil lawsuits.
3. Does using a VPN make it safe to use ChatGPT with PHI? No. A VPN encrypts your internet connection but does nothing to change the fact that the service you are sending PHI to (ChatGPT) is not HIPAA-compliant and will not sign a BAA.
4. Can I de-identify notes before pasting them into ChatGPT? While theoretically possible, true de-identification is extremely difficult and prone to error. A client's story, location, or unique circumstances can be identifying. It is not a recommended or reliable workflow; the sketch after these FAQs shows why.
5. How do I know if an AI tool is truly HIPAA-compliant? Ask them directly: "Will you sign a Business Associate Agreement (BAA)?" If the answer is no, or they don't know what that is, the tool is not compliant and should not be used with PHI.
6. What about ChatGPT Plus, ChatGPT Enterprise, or ChatGPT for Clinicians? ChatGPT Plus ($20/month) still does not sign BAAs for individual users. ChatGPT Enterprise may offer BAAs for organizations. The newest option, ChatGPT for Clinicians (launched 2026), is free for verified US physicians, NPs, PAs, and pharmacists, and offers optional HIPAA support via BAA for eligible accounts. Conversations are not used for training, and physician advisors rated 99.6% of responses as safe. However, it is a standalone chat tool — not an integrated EHR — and is currently US-only. For clinicians outside the US, or those needing integrated scheduling, billing, and patient records alongside AI notes, purpose-built solutions like ClinikEHR remain the better choice. Source: OpenAI blog, "Making ChatGPT better for clinicians"
7. Can I use AI to help with non-clinical tasks like scheduling or billing? Yes, as long as the AI tool is HIPAA-compliant and has a BAA. For tasks involving PHI (like appointment reminders with patient names), you must use compliant tools.
8. What should I do if I've already used ChatGPT with PHI? Stop immediately. Document the breach, notify your compliance officer or supervisor, and follow your organization's breach notification procedures. You may need to notify affected clients and regulatory authorities depending on the extent of the breach.
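To illustrate the point from FAQ 4, here is a minimal sketch of a naive, rule-based de-identifier and the kind of quasi-identifiers it silently misses. The patterns and the note text are hypothetical; even the HIPAA Safe Harbor method, which removes 18 categories of identifiers, can leave a client recognizable from context.

```python
import re

# A naive redactor: strips obvious names, phone numbers, and dates.
PATTERNS = [
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "[NAME]"),   # e.g. "Jane Doe"
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # e.g. 555-867-5309
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),   # e.g. 3/14/2026
]

def naive_deidentify(text: str) -> str:
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

note = ("Seen 3/14/2026. Client is the only pediatric cardiologist in "
        "Flagstaff; her twin daughters attend the middle school nearby, "
        "and she survived the 2024 warehouse fire covered by local news.")

print(naive_deidentify(note))
# The date is redacted, but profession + town + family detail + a public
# event all remain: more than enough to re-identify the client.
```

Quasi-identifiers like these, which no pattern list can reliably catch, are exactly why the de-identify-and-paste workflow is not recommended.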
Disclaimer: This article is for informational purposes only and does not constitute legal, medical, or compliance advice. HIPAA regulations, AI tool capabilities, and vendor compliance status change frequently. Always verify the current compliance status of any AI tool directly with the vendor before using it with Protected Health Information (PHI). Consult with a healthcare compliance attorney or your organization's compliance officer for guidance specific to your practice. Information about ChatGPT for Clinicians is based on OpenAI's announcement and may have changed since this article was last updated. ClinikEHR and its authors shall not be held liable for any decisions made based on the information provided herein.
Related Reading on ClinikEHR
AI & Clinical Documentation:
- AI Revolution in Clinical Notes
- AI Clinical Notes for Therapists
- Top 5 AI Clinical Notes
- Free AI Clinical Notes Generator
- How to Get Started with AI Clinical Notes
- Best AI Note Taking for Psychiatry 2026
- Best AI Tools for Human-Sounding Notes 2026
Compliance & Ethics:
- HIPAA Compliance in Shared Office Spaces 2026
- Can You Use Google Workspace as an EHR?
- HIPAA Compliance for Developers
Practice Management:
- 15 Best Clinical Notes Software 2026
- EHR for Therapist: Complete Guide
- Top 5 EHR for Mental Health 2026
- Reduce Documentation Time Without Cutting Corners 2026