Module 8: The Malpractice Reality (Stakes & Accountability)
I Pay Malpractice Insurance. OpenAI Doesn't. And That Changes Everything.
Introduction
Let me tell you about a phone call I got from a malpractice attorney.
Not mine—a plaintiff’s attorney. She was working a case against another physician, and she wanted to understand AI’s role in medicine. Specifically, she wanted to know: if a patient uses AI, gets bad advice, delays care, and suffers harm, who’s responsible?
I told her the truth: it depends on what the physician did.
“But the AI told the patient to wait,” she said. “The AI was wrong.”
“And?” I replied. “If the patient came to the physician and the physician examined them and missed the same thing, that’s potential malpractice. If the patient never came to the physician because AI told them not to, that’s probably not malpractice—the physician never had the opportunity to evaluate. If the patient came, mentioned the AI advice, and the physician dismissed them without adequate evaluation because ‘AI already said it’s nothing serious,’ that’s definitely malpractice.”
There was a long pause.
“So the AI company isn’t liable?”
“Have you read their terms of service?”
She hadn’t. I have. They’re remarkable documents—pages of language that essentially say: “This is not medical advice. Do not make healthcare decisions based on this output. We are not responsible for anything that happens if you do.”
OpenAI has disclaimers. Anthropic has disclaimers. Google has disclaimers. Every AI company that produces tools patients use for medical questions has carefully crafted legal language ensuring that when things go wrong, the liability doesn’t land on them.
You know who doesn’t have those disclaimers? You.
You have a medical license. You carry malpractice insurance. You swore an oath. When you see a patient, you assume responsibility for their care—regardless of what AI told them beforehand, regardless of what they believe based on algorithmic output, regardless of whether the AI was right or wrong.
The liability asymmetry is staggering. AI companies build tools used by millions for medical decisions. They bear no consequences when those tools fail. You see one patient at a time, and you bear all the consequences when anything goes wrong in that encounter.
This isn’t fair. It’s reality.
8.1 The Liability Asymmetry
Let me be crystal clear about something: when AI gets medical information wrong and a patient is harmed, the AI company faces zero liability in the vast majority of cases.
This isn’t speculation. This is how the legal landscape currently works.
AI companies have:
- Terms of service explicitly disclaiming medical advice
- No medical license to lose
- No malpractice insurance paying claims
- No personal liability exposure
- No regulatory oversight as healthcare providers
You have:
- A medical license that can be suspended or revoked
- Malpractice insurance with premiums that rise after claims
- Personal liability exposure, especially in cases of gross negligence
- Regulatory oversight from medical boards
- Ethical obligations under your oath
This asymmetry creates perverse incentives. AI companies can optimize for user engagement, confident-sounding outputs, and broad applicability—because they don’t pay when those confident outputs are wrong. You optimize for patient safety, accurate diagnosis, and appropriate treatment—because you do pay when things go wrong.
The legal doctrine hasn’t caught up with the technology. There’s no “AI malpractice.” There’s no mechanism for patients to sue ChatGPT when it tells them their chest pain is probably nothing and they have an MI. The disclaimers are clear, the liability is waived, and the harm falls on patients who can’t recover from the AI company.
The only deep pocket in the room is you.
8.2 AI Is Johnny Mnemonic, Not Dr. House
Here’s a framework that helps me maintain an appropriate relationship with AI:
AI is Johnny Mnemonic—a data courier, a knowledge bus. It can access information faster than you can, recall more details than you can, and process patterns across datasets larger than you could review in a lifetime. That’s genuinely valuable.
But Johnny Mnemonic doesn’t practice medicine. He carries data. He doesn’t make decisions with that data. He doesn’t bear responsibility for how the data is used. He’s infrastructure, not practitioner.
You are the physician. You examine patients. You integrate information with physical findings, contextual factors, clinical experience, and professional judgment. You make decisions. You face consequences.
This distinction matters because the interface of modern AI makes it feel like you’re consulting a colleague. ChatGPT responds in conversational prose. It says “I think” and “I would consider” and “In my assessment.” It mimics the language of professional opinion.
But it’s not opinion. It’s pattern-completion. AI doesn’t “think” in any meaningful sense. It generates text that looks like thinking based on patterns in training data. When AI says “I would consider X,” it’s not expressing a clinical judgment it will stand behind. It’s producing text that resembles how a clinician might phrase advice.
The moment you start treating AI output as clinical judgment rather than data retrieval, you’re in dangerous territory. You’re importing AI’s confidence into your decision-making without importing AI’s accountability—because there is no AI accountability.
Stay clear on roles:
- AI provides information → You evaluate information
- AI suggests possibilities → You determine applicability
- AI generates text → You make decisions
- AI faces no consequences → You face all of them
8.3 The Decision-Making Authority Principle
Here’s the principle I want you to internalize: You cannot outsource clinical judgment. You cannot delegate decision-making to systems that bear no accountability for those decisions.
This seems obvious stated plainly. But in practice, it’s easy to slide.
The slide happens like this: Patient presents with symptoms. AI gave them an assessment. The assessment sounds reasonable. Your exam doesn’t contradict it. You’re busy. The AI already did the differential diagnosis work. You confirm the AI assessment and move on.
What just happened? You let AI’s pattern-matching substitute for your clinical reasoning. You adopted AI’s conclusion without independently reaching it. You used AI as a consultant whose opinion carries weight.
But AI isn’t a consultant. Consultants are physicians who bear responsibility for their recommendations. If a cardiologist tells you a patient doesn’t need intervention and the patient has an MI, that cardiologist faces liability exposure. Their recommendation carries weight because they carry risk.
AI’s recommendation carries no weight of accountability. AI will say the same thing regardless of consequences. AI doesn’t lie awake worrying about the patient it misdiagnosed. AI doesn’t adjust its approach after watching a patient crash.
This means AI recommendations deserve different treatment than consultant recommendations:
- You should verify AI assessments independently, not assume their validity
- You should document your independent reasoning, not defer to AI’s logic
- You should treat AI as information source, not decision support
- You should maintain full clinical ownership of every decision
The patient’s outcome is your responsibility. The defense “AI said it was probably benign” will not protect you in a malpractice case. It will damage you—it shows you abdicated judgment to a system with no accountability.
8.4 Documentation in the AI Age
Documentation has always been important. In the AI age, it’s essential—and the requirements have evolved.
Here’s what you need to document when AI is part of the clinical picture:
Document that AI was involved:
“Patient reports consulting AI (ChatGPT) regarding symptoms prior to presentation.”
This establishes that AI information entered the encounter.
Document what AI told them (if relevant):
“Patient states AI suggested possible gastritis. Patient concerned this assessment was incorrect.”
This shows you were aware of the AI assessment and addressed it specifically.
Document your independent assessment:
“Physical examination revealed [findings] not available through AI assessment. Based on clinical evaluation, diagnosis is [X], which [differs from/confirms] AI suggestion for the following reasons: [specific clinical reasoning].”
This is crucial. It establishes that you reached your conclusion through clinical examination and professional judgment, not by rubber-stamping AI’s pattern-matching.
Document patient education:
“Discussed AI limitations with patient. Specifically addressed that AI cannot perform physical examination and may miss [relevant findings]. Patient verbalized understanding.”
Document your decision-making authority:
“Final clinical assessment and treatment plan determined by physician evaluation, integrating but not deferring to patient’s AI-sourced information.”
This language explicitly establishes that you made the decision.
8.5 When AI Has Already Caused Harm
Here’s the scenario you’ll face repeatedly: patient presents with harm that resulted, at least in part, from AI advice. They delayed care. They stopped medication. They self-treated. By the time they reach you, damage is done.
How do you handle this?
First: Do not lecture.
The patient is already sick. They’re already scared. They may already feel stupid for trusting AI. Piling on with “This is why you can’t trust AI” accomplishes nothing except damaging your relationship and their future willingness to disclose AI use.
Second: Treat the patient.
Medical care first. Always. The teachable moment comes after they’re stable, if it comes at all.
Third: Document factually.
“Patient reports delaying presentation by approximately 72 hours based on AI assessment (ChatGPT) that symptoms were likely viral and would self-resolve. On presentation, findings consistent with bacterial infection requiring [treatment]. Patient now understands the limitations of AI assessment for this type of presentation.”
This documentation is non-judgmental but factually complete.
Fourth: Teach, don’t shame.
When the patient is stable and receptive:
“I understand why you asked AI. Most people do now. But here’s what AI couldn’t detect: your vital signs, your perfusion, how you looked when you walked in the door. Those are the findings that told me this wasn’t viral. Next time, here’s your rule: if AI says wait but you’re getting worse instead of better, come in anyway. Your body’s sensors override AI’s pattern-matching.”
Fifth: Consider the systems issue.
Why did this patient rely on AI instead of contacting your office? Was it access? Cost? Fear of bothering you? AI fills gaps. When patients use AI dangerously, it’s often because we’ve created the conditions where AI seems like the best available option.
8.6 Protecting Yourself Proactively
Beyond documentation, there are practice-level protections you should implement:
Create clear guidance for your patients about AI use:
Consider adding to your patient education materials:
“We understand that many patients research symptoms using AI tools before appointments. This is reasonable for general information. However, AI cannot examine you, doesn’t know your full medical history, and cannot make clinical decisions. Please use AI for background information only, not for decisions about whether to seek care, change medications, or modify treatments. When in doubt, contact our office.”
Establish communication channels that compete with AI:
Patients use AI because it’s available when you’re not. Consider:
- Patient portal messaging with reasonable response times
- Nurse triage lines for after-hours questions
- Clear guidance on what warrants same-day contact
- Explicit permission to call with concerns
If you’re more accessible, AI becomes less necessary for the dangerous use cases.
Train staff to ask about AI:
Your medical assistant can ask during rooming: “Did you research your symptoms online or with AI before coming in today?” This normalizes the question and gets the information documented before you enter.
Know your malpractice coverage:
Talk to your malpractice carrier about AI-related liability. Understand what’s covered, what’s not, and what documentation they recommend. Carriers are developing guidance on this rapidly; stay current.
Clinical Scenarios
Scenario 1: The Delayed Diagnosis
Presentation: 45-year-old man presents with three weeks of fatigue and unintentional weight loss. He looks cachectic. On further history, symptoms started two months ago but he didn’t seek care because “ChatGPT said it was probably stress and recommended lifestyle modifications.”
What AI Told Him: Two months ago, he described fatigue, mild weight loss, and work stress to ChatGPT. AI suggested stress-related symptoms, recommended sleep hygiene, exercise, and possibly counseling. It mentioned seeing a doctor “if symptoms persist or worsen significantly.”
What AI Got Right:
- Stress is a common cause of fatigue
- Lifestyle modifications are reasonable first steps
- It recommended seeing a doctor if symptoms persisted
What AI Got Wrong:
- No mechanism to assess severity of weight loss
- No ability to distinguish cachexia from intentional or benign weight loss
- “Persist or worsen significantly” is subjective—patient didn’t recognize his trajectory as significant
- Two months of delay in what would prove to be a malignancy
Your Exam Findings: Cachexia. Hepatomegaly. Suspicious lymphadenopathy. This needs urgent workup.
Integration Dialogue:
You: “I need to examine you thoroughly and order some tests. While I’m doing that, let me explain something about what happened over the past two months.”
Patient: “I know I should have come in sooner…”
You: “Let’s not focus on that. You did what seemed reasonable based on the information you had. AI gave you general guidance that’s right for most people—most fatigue is stress, most weight loss is lifestyle. But AI couldn’t see what I’m seeing now: this isn’t the weight loss of someone who forgot to eat because they’re busy. This is a different kind of weight loss. AI couldn’t weigh you, couldn’t feel your abdomen, couldn’t assess your muscle mass versus two months ago.”
Outcome: Workup revealed Stage III colon cancer. Resection followed by adjuvant chemotherapy. Patient doing well at one year.
Scenario 2: The Medication Interaction
Presentation: 67-year-old woman on multiple medications including warfarin, presents with INR of 8.3 and bruising. She stopped her vitamin K supplements two weeks ago because “AI said they interfere with warfarin and should be avoided.”
What AI Told Her: She asked ChatGPT about vitamin K and warfarin. AI correctly noted that vitamin K affects warfarin efficacy and that “patients on warfarin should maintain consistent vitamin K intake rather than taking high-dose supplements.”
The Problem: AI gave accurate general information that led to a dangerous specific decision. Her vitamin K supplement was part of her warfarin management plan—her dose was titrated accounting for it.
Integration Dialogue:
You: “Let me explain what happened, because this is important and it’s not your fault.”
Patient: “I was trying to be careful…”
You: “I know. And in a vacuum, AI told you something true: vitamin K affects warfarin. But here’s what AI couldn’t know: you weren’t randomly taking vitamin K. It was part of your regimen. We adjusted your warfarin dose specifically to account for it. When you stopped the vitamin K, your warfarin became much stronger relative to your system. That’s why your INR went up. This is a perfect example of why medication changes need to go through us, not through AI. AI knows general facts. We know your specific facts.”
Outcome: INR corrected. Warfarin and vitamin K resumed at previous doses. Patient now calls before any medication changes.
Scenario 3: The Liability Near-Miss
Presentation: 38-year-old woman, four weeks postpartum, presents with leg swelling. She almost didn’t come in because “AI said swelling is normal after pregnancy.”
What AI Told Her: She asked ChatGPT about leg swelling postpartum. AI noted that lower extremity edema is common in the postpartum period and typically resolves within a few weeks. It recommended leg elevation and hydration.
What AI Got Wrong: She has unilateral swelling. Four weeks postpartum is still well within the high-risk window for DVT. She mentioned the asymmetry, but AI anchored on the “postpartum swelling” pattern-match and missed its significance.
Your Exam Findings: Left leg 3 cm larger than right. Calf tenderness. Positive Homans’ sign. This is DVT until proven otherwise.
Integration Dialogue:
You: “I’m really glad you came in anyway. Let me tell you what I’m concerned about.”
Patient: “AI said the swelling was normal…”
You: “AI heard ‘postpartum’ and ‘leg swelling’ and matched it to the most common pattern: normal postpartum edema. But look at what I see that AI couldn’t: your left leg is significantly larger than your right. That’s not normal postpartum edema—that’s asymmetric, which is a different thing entirely. The weeks right after delivery are actually a high-risk period for blood clots in your leg. AI couldn’t measure your legs. I can.”
Outcome: Ultrasound confirmed proximal DVT. Anticoagulation initiated. No PE. Patient did well.
Practical Tools
Documentation Templates
Basic AI Involvement:
Patient consulted AI (ChatGPT/similar) regarding [symptoms] prior to presentation. AI assessment of [X] was [consistent with/inconsistent with] clinical findings. Physical examination revealed [findings not available to AI]. Clinical impression: [diagnosis]. Discussed AI limitations regarding [specific limitation].
AI-Related Delay in Care:
Patient reports [X]-day delay in seeking evaluation based on AI assessment suggesting [AI conclusion]. On presentation, clinical findings indicate [actual diagnosis]. AI assessment was limited by [specific limitation—inability to examine, lack of patient-specific context, etc.]. Patient educated on appropriate use of AI for medical questions and thresholds for seeking in-person evaluation.
AI-Related Medication Issue:
Patient modified medication regimen based on AI-sourced information regarding [topic]. AI provided [general information] that was [accurate/inaccurate] at population level but [inappropriate/dangerous] given patient’s specific [medication regimen/medical history/circumstances]. Intervention: [what you did]. Patient counseled to contact office before any medication modifications regardless of AI guidance.
Talking Points for Patients
On liability:
“AI companies don’t pay when their advice is wrong. I do. That’s why I need to be the one making clinical decisions—not because I don’t trust technology, but because I’m the one who’s accountable.”
On roles:
“Think of AI like a research assistant who reads really fast. It can gather information, but it can’t examine you, it doesn’t know your full history, and it doesn’t bear any responsibility for being wrong. That’s my job.”
On decision authority:
“I’ll always consider what AI told you—it’s useful information. But the final call is mine, because I’m the one who can see you, touch you, and take responsibility for what happens next.”
Implementation Guide
Immediate Actions
This week: Add an AI-involvement question to your intake process. Document AI use when disclosed.
This month: Review your documentation templates. Ensure they include language establishing your independent clinical reasoning.
This quarter: Develop patient education materials about appropriate AI use. Consider adding to new patient paperwork.
Long-Term Practice Evolution
Build accessibility: Create alternatives to AI for after-hours questions—portal messaging, nurse lines, clear guidance on what warrants contact.
Train staff: Ensure everyone understands that AI disclosure is valuable, not problematic. Create culture where patients feel safe telling you what AI told them.
Stay current: Malpractice guidance on AI is evolving. Check with your carrier annually. Read specialty society guidance as it develops.
Pitfalls to Avoid
Over-documenting defensively: Your notes should demonstrate clinical reasoning, not paranoia about AI. Document what matters, not every possible liability scenario.
Blaming patients: Documentation should be factual, not judgmental. “Patient delayed care based on AI advice” is appropriate. “Patient foolishly trusted AI” is not.
Ignoring the systemic issue: If patients are using AI for things they should call you about, ask why. Access problems are yours to solve, not patients’ to navigate.
Key Takeaways
- Liability is asymmetric. AI companies have disclaimers. You have a medical license. When AI-influenced care goes wrong, you bear the consequences.
- AI is a data courier, not a consultant. Johnny Mnemonic carries information; he doesn't practice medicine. Treat AI output as data, not clinical judgment.
- Document AI involvement. Note when patients consulted AI, what it told them, and how your assessment differed. Establish your independent reasoning.
- Never outsource clinical judgment. You cannot delegate decision-making to systems with no accountability. AI informs; you decide.
- When AI causes harm, treat first and teach later. Lectures don't help sick patients. Education comes after stabilization.
- Build systems that compete with AI. Accessible alternatives reduce dangerous AI reliance. Be available enough that AI becomes background, not primary care.
Final Remarks
Here’s the reality I live with every day: I pay malpractice insurance. OpenAI doesn’t.
That single fact shapes everything about how I integrate AI into my practice.
AI companies can optimize for confident outputs, broad applicability, user engagement. They can build systems that always have an answer, never express uncertainty, and generate plausible-sounding responses regardless of accuracy. They face no consequences when those systems fail.
I can’t optimize that way. I have to optimize for patient safety, diagnostic accuracy, and appropriate care. I have to maintain the judgment to override confident-sounding AI when my clinical findings tell a different story. I have to bear responsibility for every decision, whether AI informed it or not.
That’s not a complaint. That’s the job.
The physicians who thrive in the AI age will be the ones who understand this asymmetry deeply. Who treat AI as a useful tool but never as a decision-maker. Who document their reasoning clearly. Who maintain the humility to let AI inform their thinking and the confidence to override it when necessary.
AI will get better. AI will become more integrated. AI will influence more patient decisions before they ever reach your office.
But AI will not pay when things go wrong. You will.
Remember that. Practice accordingly.
The author is a facial plastic surgeon who has paid malpractice insurance for 25 years and plans to keep paying it until he retires, and who believes that accountability is the feature, not the bug, of human medicine. TheDude notes that he also lacks malpractice insurance, which is one of many reasons he abides within his validated knowledge domains rather than pretending to practice medicine. He strongly recommends you do the practicing, not him.
