Can You Use ChatGPT or AI for Therapy Notes? What’s Ethical and What’s Not
Is it safe or legal to use ChatGPT for clinical notes? This guide breaks down the ethics of AI therapy notes, HIPAA compliance, and the key differences between public AI tools and purpose-built solutions like ClinikEHR.
It’s the question every tech-savvy therapist in the United States, UK, Canada, Australia, and Africa is asking: “Can I just use ChatGPT to write my therapy notes?” The temptation is understandable. Public AI tools are powerful, accessible, and seem like a quick fix for the endless burden of documentation. But the answer is far more complex than a simple yes or no, and getting it wrong can have serious consequences for your practice and your clients.
Using the wrong AI tool can expose you to significant legal and ethical risks. However, using the right AI tool can be transformative. This guide will clarify the misconceptions, explain what HIPAA really says, and show you how to use AI for documentation ethically and safely, with purpose-built solutions like ClinikEHR’s Clinical Note AI.
Why This Question Keeps Coming Up: The Allure of Easy Documentation
The rise of generative AI has been meteoric. Tools like ChatGPT, Gemini, and Claude are now part of the public consciousness. For clinicians drowning in paperwork, the appeal is obvious: a technology that can summarize text and generate human-like prose seems like the perfect antidote to documentation fatigue. This curiosity, however, often outpaces a clear understanding of the risks involved. The convenience of a public AI tool can easily overshadow the fundamental requirements of handling Protected Health Information (PHI).
What HIPAA Actually Says About AI Tools
HIPAA doesn’t mention “ChatGPT” by name, but its rules are technology-neutral and apply to any tool that handles PHI. The two most important concepts for a therapist to understand are:
- Business Associate Agreements (BAA): Under HIPAA, any third-party vendor that creates, receives, maintains, or transmits PHI on your behalf is a “Business Associate.” You are legally required to have a signed BAA with them. This agreement contractually obligates the vendor to protect PHI according to HIPAA standards. Public AI tools like the free version of ChatGPT do not and will not sign a BAA. Using them with PHI is a direct violation.
- Minimum Necessary Standard: This principle dictates that you should only use, disclose, or request the minimum amount of PHI necessary to accomplish a specific task. Pasting an entire session transcript into a public tool goes far beyond this standard.
Using a non-compliant tool for any task involving PHI violates HIPAA in the US, and similar principles apply under the GDPR (UK/EU), PIPEDA (Canada), and comparable data protection laws in Australia and across Africa. The penalties can include severe fines, licensure issues, and irreparable damage to your professional reputation.
ChatGPT vs. Purpose-Built Clinical AI: A Critical Comparison
The difference between a public AI model and a HIPAA-compliant clinical AI tool is not the underlying technology—it's the entire security and privacy infrastructure built around it.
| Feature | ChatGPT (Free Public Version) | ClinikEHR’s Clinical Note AI |
|---|---|---|
| HIPAA Compliance | No. Does not sign a BAA. | Yes. A BAA is provided to all users. |
| Data Privacy | Conversations may be used for model training. | Your data is private and is never used for training. |
| PHI Protection | No specific safeguards to identify or protect PHI. | Platform is designed to secure PHI with end-to-end encryption. |
| Clinical Accuracy | General-purpose model, not tuned for clinical documentation or note formats. | Fine-tuned on clinical terminology and SOAP/DAP/BIP formats. |
| Workflow Integration | Requires manual copy-pasting, creating risk at each step. | Fully integrated into your EHR workflow, ensuring a secure chain of custody. |
The Bottom Line: Pasting PHI into ChatGPT is the digital equivalent of discussing a client’s case in a crowded coffee shop. It’s a breach of confidentiality, plain and simple.
Ethical Use of AI in Clinical Documentation: A Framework
Beyond the law, our ethical codes demand a thoughtful approach to technology. The core principles of confidentiality, informed consent, and clinician accountability are non-negotiable.
- Confidentiality: Your primary ethical duty is to protect your client’s privacy. This means using only secure, HIPAA-compliant tools for any task involving PHI. There is no ethical gray area here.
- Informed Consent: While you may not need to get consent for every internal software tool you use, transparency is key. It’s good practice to inform clients in your intake paperwork that you use secure, HIPAA-compliant software systems to manage their records, which may include AI-powered tools that assist with administrative tasks.
- Clinician Accountability: An AI is a tool, not a colleague. You, the clinician, are 100% responsible for the accuracy and integrity of the final clinical note. The AI generates a draft; you provide the expert review, clinical nuance, and final signature. Never trust an AI-generated note blindly.
Examples of Ethical vs. Risky Use Cases
Risky & Unethical Use Case: A therapist in the UK records a session on their phone, uses a free online service to get a transcript, and pastes the entire transcript into ChatGPT with the prompt, “Turn this into a SOAP note.”
- Why it’s wrong: PHI was shared with two non-compliant vendors. No BAA is in place, violating both HIPAA (if serving US clients) and GDPR principles. The data may be stored indefinitely and used for training.
Safe & Ethical Use Case: A therapist in Australia uses the integrated ClinikEHR mobile app. After a session, they dictate a 60-second summary of their thoughts into the app.
- Why it’s right: The audio is processed within ClinikEHR’s secure, compliant environment. The data is encrypted, and a BAA (or equivalent data processing agreement) is in place. The AI generates a structured SOAP note draft, which the therapist then reviews, edits for nuance, and signs—all within the same secure platform.
Product Insight: ClinikEHR’s Commitment to Ethical AI
At ClinikEHR, we believe AI should empower clinicians, not endanger them. Our Clinical Note AI was built with a security-first mindset:
- HIPAA and Global Compliance: We provide a BAA and adhere to the highest standards of data protection, including GDPR, PIPEDA, and more.
- Zero-Knowledge Architecture: Your data is encrypted so that even we cannot access it.
- No Data for Training: We will never use your or your clients’ data to train our AI models.
- Integrated and Secure: By keeping the entire workflow within one platform, we eliminate the risks associated with copy-pasting between different applications.
Conclusion: AI Isn’t the Problem — Using It Wrong Is
Artificial intelligence is not inherently risky for healthcare; in fact, it holds immense promise to reduce burnout and improve care. The risk comes from applying consumer-grade tools to professional, high-stakes tasks. Using ChatGPT for therapy notes is a dangerous shortcut that compromises client confidentiality and violates the law.
The ethical path forward is to embrace purpose-built, HIPAA-compliant AI solutions that are designed to protect PHI while delivering the time-saving benefits you need. By choosing the right tools, you can uphold your ethical duties, protect your practice, and reclaim your time.
Frequently Asked Questions (FAQs)
1. Is it ever okay to use ChatGPT for non-PHI tasks in my practice? Yes. Using ChatGPT for brainstorming blog post ideas, drafting marketing copy, or creating generic client handouts is generally acceptable, as long as no PHI is entered.
2. What are the specific penalties for a HIPAA violation involving AI? Penalties can range from thousands to millions of dollars, depending on the level of negligence. It can also lead to corrective action plans, loss of licensure, and civil lawsuits.
3. Does using a VPN make it safe to use ChatGPT with PHI? No. A VPN encrypts your internet connection but does nothing to change the fact that the service you are sending PHI to (ChatGPT) is not HIPAA-compliant and will not sign a BAA.
4. Can I de-identify notes before pasting them into ChatGPT? While theoretically possible, true de-identification is extremely difficult and prone to error. A client's story, location, or unique circumstances can be identifying. It is not a recommended or reliable workflow.
5. How do I know if an AI tool is truly HIPAA-compliant? Ask them directly: "Will you sign a Business Associate Agreement (BAA)?" If the answer is no, or they don't know what that is, the tool is not compliant and should not be used with PHI.
Ready to Use AI the Right Way?
Discover how a HIPAA-compliant AI can transform your documentation workflow. Try ClinikEHR’s Clinical Note AI for free and experience the difference.
Try the Free AI Note Generator