Module 8: The Malpractice Reality for Nurses
Your License Is on the Line, AI's Disclaimer Isn't
The Lawsuit Named the Nurse
The deterioration algorithm showed “low risk.” The nurse documented “stable” based on the EHR’s auto-populated assessment field. The patient coded three hours later.
When the family sued, they didn’t sue the algorithm. They didn’t sue the EHR vendor. They didn’t sue the company that designed the deterioration prediction model.
They sued the nurse.
The allegation: failure to assess. The documentation showed “stable,” but the patient wasn’t stable. The algorithm said “low risk,” but the patient was deteriorating. The nurse relied on technology instead of nursing judgment, and a patient died.
The defense that “the computer said he was fine” didn’t fly. Because the computer doesn’t have a nursing license. The computer doesn’t have a professional duty to assess. The computer doesn’t bear responsibility for patient outcomes.
The nurse does.
8.1 Understanding Nursing Malpractice
Nursing malpractice requires four elements:
1. Duty: The nurse had a professional duty to the patient
2. Breach: The nurse failed to meet the standard of care
3. Causation: The breach caused or contributed to harm
4. Damages: The patient suffered actual harm
Where AI Complicates Each Element
Duty: Your duty is to assess, plan, implement, and evaluate nursing care. This duty doesn’t transfer to AI. Algorithms don’t have duty. You do.
Breach: The standard of care is what a reasonably prudent nurse would do. Following an AI recommendation that contradicts your own assessment is not what a reasonably prudent nurse would do.
Causation: “I was following the algorithm” doesn’t break the causal chain. If you implemented an AI recommendation without exercising independent judgment, you remain causally responsible.
Damages: The patient’s harm is real, but AI has no legal personhood and cannot be made to answer for it. The nurse who relied on the AI is the party who can.
8.2 The Liability Asymmetry
Here’s the fundamental unfairness you’re navigating:
AI Companies:
✗ No nursing license to lose
✗ No professional board investigation
✗ Limited liability through Terms of Service
✗ Disclaimer: “Not intended to replace professional judgment”
✗ No malpractice insurance
✗ No personal consequences when AI fails
You:
✓ Professional license at risk
✓ State Board of Nursing investigation
✓ Malpractice liability (personal and professional)
✓ Potential criminal charges (for serious errors)
✓ Career consequences
✓ Personal moral burden
The Asymmetry: AI provides the recommendation. You bear the consequences. AI gets a disclaimer. You get a lawsuit.
The Disclaimer Defense That Doesn’t Work
Every clinical AI includes some version of: “This tool is for informational purposes only and does not constitute medical advice. Users should exercise professional judgment and not rely solely on this system’s recommendations.”
Notice what this means: the AI company knows its own system shouldn’t be relied upon, and it put that in writing.
So when you document “per algorithm recommendation” or “as suggested by CDSS,” you’re not protected. You’ve documented reliance on a system that explicitly says not to rely on it.
8.3 State Board of Nursing Implications
Beyond malpractice suits, there’s professional licensure.
What State Boards Care About:
Independent Judgment: Did the nurse exercise professional nursing judgment, or merely follow computer outputs?
Scope of Practice: Did the nurse use AI appropriately within nursing scope, or allow AI to dictate actions beyond nursing scope?
Documentation Standards: Does the documentation reflect actual nursing assessment, or mere transcription of AI outputs?
Professional Accountability: Did the nurse take accountability for care decisions, or defer accountability to technology?
Board Investigation Scenarios
Scenario 1: Nurse documents AI-suggested assessment verbatim without modification. Patient harm occurs. Board question: “Where is evidence of independent nursing assessment?”
Scenario 2: Nurse overrides repeated clinical decision support alerts without documentation. Patient harm occurs. Board question: “What was your clinical reasoning for these overrides?”
Scenario 3: Nurse follows AI treatment recommendation that exceeds nursing scope. Patient harm occurs. Board question: “Did you recognize this recommendation exceeded nursing scope?”
In each case, the board doesn’t evaluate the AI. They evaluate the nurse.
8.4 Documentation That Protects
Your documentation is your defense. Here’s how to document in the AI era.
The Golden Rule
Document YOUR assessment, YOUR reasoning, YOUR judgment.
AI recommendations are information you considered. Your documentation should show professional nursing judgment, not algorithmic compliance.
Documentation Patterns
Bad Documentation: “Per sepsis algorithm, patient at low risk. Continue monitoring per protocol.”
Problems:
- Defers to algorithm
- No independent assessment
- No clinical reasoning
- “Per protocol” without nursing judgment
Good Documentation: “Sepsis screening algorithm indicates low risk. Nursing assessment: Patient alert, skin warm and dry, vital signs within baseline, no new complaints. No clinical signs concerning for infection at this time. Continue q4h assessments; will reassess if condition changes.”
Better because:
- Notes algorithm input
- Documents independent assessment
- Shows clinical reasoning
- Demonstrates professional judgment
Bad Documentation: “Assessment auto-populated from EHR. Patient stable.”
Problems:
- Explicitly shows no independent assessment
- Auto-population doesn’t equal nursing judgment
- No verification of accuracy
Good Documentation: “Nursing assessment: [Your actual observations]. Patient [your determination of status based on assessment].”
Better because:
- Documents YOUR observations
- Reflects YOUR professional judgment
- No mention of auto-population (which shouldn’t be relied upon)
The AI Override Template
When you override AI recommendations:
“AI/CDSS recommended [X]. Based on nursing assessment including [specific observations], I determined [Y] was appropriate because [clinical reasoning]. [Specific follow-up actions].”
This documents:
- AI input acknowledged
- Independent assessment performed
- Clinical reasoning articulated
- Professional judgment exercised
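If your informatics team builds documentation aids (smart phrases, structured note templates), the template above can be encoded so that none of these elements can be skipped. A minimal sketch, assuming hypothetical field names and no real EHR integration:

```python
# Hypothetical sketch: an override note that refuses to assemble itself
# unless every required element is present. Names are illustrative only.

def build_override_note(ai_recommendation: str, observations: list[str],
                        decision: str, reasoning: str, follow_up: str) -> str:
    """Assemble an AI-override note; fail loudly if any element is missing."""
    required = {
        "AI recommendation": ai_recommendation,
        "observations": observations,
        "decision": decision,
        "clinical reasoning": reasoning,
        "follow-up actions": follow_up,
    }
    missing = [name for name, value in required.items() if not value]
    if missing:
        raise ValueError(f"Override note incomplete; missing: {', '.join(missing)}")
    return (f"AI/CDSS recommended {ai_recommendation}. "
            f"Based on nursing assessment including {', '.join(observations)}, "
            f"I determined {decision} was appropriate because {reasoning}. "
            f"{follow_up}")

print(build_override_note(
    ai_recommendation="early ambulation per fall-risk protocol",
    observations=["new orthostatic dizziness", "unsteady gait on standing"],
    decision="deferring ambulation pending provider evaluation",
    reasoning="assessment findings suggest orthostatic hypotension the score does not capture",
    follow_up="Provider notified at 1410; orthostatic vitals obtained; reassess q1h.",
))
```

The design point is the ValueError: a note that omits the assessment or the reasoning should be impossible to generate, not merely discouraged.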
8.5 When AI and Assessment Conflict
The hardest situations: AI says one thing, your assessment says another.
Scenario: Algorithm Says Low Risk, You’re Concerned
Your Assessment: Patient’s baseline has changed. Something’s off. You can’t quantify it exactly, but experienced nursing intuition says deterioration.
Algorithm: Low risk score. No alerts triggered.
What To Do:
- Trust your assessment. Your senses are detecting things the algorithm cannot (the sketch after this list shows why).
- Document specifically. Not “seems off” but specific observations: “Decreased engagement compared to baseline, slower response to questions, subtle restlessness not previously noted.”
- Escalate appropriately. “I understand the algorithm shows low risk, but my assessment indicates clinical change. I’m requesting evaluation.”
- Document the escalation. Time, who you notified, their response, your continued plan.
- Continue reassessment. If you’re concerned, don’t wait for the algorithm to catch up.
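It helps to remember what a deterioration score actually sees. Below is a deliberately simplified sketch in the spirit of published early-warning scores such as NEWS2; the bands and weights are approximate, for illustration only, and not the published tables. The point is the input list: every parameter is a structured data point.

```python
def simplified_early_warning_score(resp_rate: int, spo2: int, on_oxygen: bool,
                                   systolic_bp: int, pulse: int, temp_c: float,
                                   alert: bool) -> int:
    """Toy deterioration score. Bands are illustrative, NOT real NEWS2 tables."""
    score = 0
    # Respiratory rate: points accrue as it leaves the normal band
    if resp_rate <= 8 or resp_rate >= 25:
        score += 3
    elif 21 <= resp_rate <= 24:
        score += 2
    # Oxygen saturation, plus supplemental oxygen
    if spo2 <= 91:
        score += 3
    elif spo2 <= 95:
        score += 1
    if on_oxygen:
        score += 2
    # Hemodynamics
    if systolic_bp <= 90:
        score += 3
    elif systolic_bp <= 100:
        score += 1
    if pulse <= 40 or pulse >= 131:
        score += 3
    elif pulse >= 111:
        score += 2
    # Temperature
    if temp_c <= 35.0 or temp_c >= 39.1:
        score += 2
    # Consciousness: reduced to a single yes/no flag
    if not alert:
        score += 3
    return score

# Vitals at baseline score 0 ("low risk") even while the nurse is noting
# decreased engagement, slower responses, and subtle restlessness. There is
# no input field for any of those observations.
print(simplified_early_warning_score(resp_rate=16, spo2=97, on_oxygen=False,
                                     systolic_bp=118, pulse=88, temp_c=37.1,
                                     alert=True))
```

There is no parameter for “slower to respond than this morning.” If it isn’t a structured input, the score cannot weigh it; that gap is exactly where your assessment carries the load.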
Scenario: Algorithm Suggests Action, You Disagree
Algorithm/CDSS: Recommends intervention X.
Your Assessment: Intervention X is not appropriate for this patient due to [factors algorithm doesn’t know].
What To Do:
- Assess why the algorithm is recommending this. What data is it using? Is that data accurate?
- Identify the conflict. What do you know that the algorithm doesn’t?
- Document your reasoning. “CDSS recommends [X]. Patient assessment reveals [factors]. [X] not appropriate because [reasoning]. Alternative nursing action: [Y].”
- Communicate with team. If recommendation involves physician orders, discuss discrepancy.
- Follow your judgment. You bear the accountability. Make sure your actions reflect your professional assessment.
8.6 Institutional AI Policies
Your facility likely has (or should have) policies about AI use. Know them.
Questions to Ask About Your Facility’s AI Systems:
- What clinical AI systems are in use?
- What is my professional obligation regarding AI recommendations?
- What documentation standards apply to AI-assisted care?
- What is the process for reporting AI-related concerns?
- Who is responsible when AI provides incorrect information?
Red Flags in AI Policies:
“Nurses should follow AI recommendations unless clearly contraindicated” (shifts judgment to AI)
“AI documentation is equivalent to nursing assessment” (it’s not)
“Override documentation is optional” (it’s not)
No clear guidance on AI-nursing judgment conflicts
What Good Policies Include:
✓ AI is decision support, not decision maker
✓ Nursing judgment supersedes AI recommendations
✓ All AI input requires independent nursing verification
✓ Override decisions require documentation
✓ Clear process for AI-related incident reporting
Teaching Scenarios
Scenario #1: The Documentation Audit
Setup: Quality assurance reviews your documentation. They find:
- 15 assessments with identical language (auto-populated)
- No documentation of clinical reasoning
- Algorithm recommendations documented without independent verification
The Risk: If any of those patients had adverse outcomes, your documentation shows no evidence of nursing judgment. It shows compliance with computer outputs.
The Learning: Every assessment must reflect YOUR observations. Auto-populated text is a starting point to be modified, not a completed assessment.
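As context for how these audits work: identical note text is trivial to surface with a few lines of analysis, which is why unmodified auto-population gets caught. A minimal sketch, assuming a hypothetical export of notes with made-up column names:

```python
import pandas as pd

# Hypothetical export of nursing assessment notes; columns are made up.
notes = pd.DataFrame({
    "nurse_id": ["RN-7", "RN-7", "RN-7", "RN-2"],
    "patient_id": ["P1", "P2", "P3", "P4"],
    "note_text": [
        "Patient resting comfortably. No acute distress.",  # auto-populated
        "Patient resting comfortably. No acute distress.",  # auto-populated
        "Patient resting comfortably. No acute distress.",  # auto-populated
        "Alert, oriented x4; ambulating with steady gait; denies pain.",
    ],
})

# Any note body repeated across different patients is a flag for
# unmodified auto-population rather than individual assessment.
dupes = (notes.groupby("note_text")
              .agg(n_patients=("patient_id", "nunique"),
                   nurses=("nurse_id", "unique"))
              .query("n_patients > 1"))
print(dupes)
```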
Scenario #2: The Deposition Question
Setup: You’re deposed in a malpractice case. Attorney asks:
“So you documented that the patient was stable. What assessment did you perform to reach that conclusion?”
Your options:
Bad Answer: “The algorithm showed low risk, and the auto-populated field said stable.”
Problem: You just admitted you didn’t independently assess. The algorithm and auto-population aren’t nursing judgment.
Good Answer: “I performed a nursing assessment including observation of patient’s responsiveness, skin color, respiratory effort, vital signs review, and comparison to their baseline. Based on those observations, I documented my assessment that the patient was stable at that time.”
Why Better: You demonstrate independent professional judgment. The algorithm may have been one input, but YOUR assessment reached the conclusion.
Scenario #3: The Board Interview
Setup: State Board of Nursing is investigating an incident. They ask:
“Can you explain your clinical reasoning for the care you provided?”
The Test: Can you articulate professional nursing judgment? Or did you just follow what the computer said?
What They Want to Hear:
- Independent assessment
- Clinical reasoning based on nursing knowledge
- Professional judgment applied to specific patient
- Awareness of nursing scope
- Accountability for decisions
What Raises Concerns:
- “The algorithm said…”
- “The computer recommended…”
- “I was just following the protocol…”
- Unable to articulate independent reasoning
Practical Tools
The “Integrate, Don’t Replace” Framework
AI provides: Data, patterns, information.
You provide: Sensing, context, judgment.
Neither is complete without the other. The goal is integration, not replacement.
When AI Says Low Risk But You’re Concerned:
- Document your observations specifically: Not “patient seems off,” but “decreased engagement compared to baseline, delayed response to commands, subtle diaphoresis not previously noted”
- Name the pattern: “Clinical picture concerning for early sepsis/delirium/deterioration despite stable vital signs”
- Escalate appropriately: “I understand the algorithm shows low risk, but I’m observing changes that concern me. I’m requesting evaluation.”
- Document the escalation: “At [time], notified [physician] of clinical concerns including [specifics]. Algorithm indicated [X]; nursing assessment indicated [Y]. Physician notified and will evaluate.”
The “Five Senses Scan”
Before accepting any AI assessment, do a rapid check:
Visual: What do I see that AI cannot? Skin color, expression, positioning, breathing pattern?
Auditory: What do I hear? Respiratory sounds, voice quality, emotional tone, what they’re not saying?
Olfactory: Any smells that might indicate pathology?
Tactile: Skin temperature, moisture, pulse quality, muscle tone?
Temporal: How is this different from my last assessment? Different from their baseline?
If your five senses scan detects something the AI missed, that’s not a contradiction—that’s you providing what AI cannot.
Key Takeaways
- You bear the liability, not the algorithm. AI companies have disclaimers. You have a nursing license. When things go wrong, accountability falls on you.
- Documentation is your defense. Your documentation must show independent nursing judgment, not algorithmic compliance.
- "The computer said so" is not a defense. It's actually evidence that you failed to exercise professional judgment.
- State Boards evaluate nurses, not AI. Investigations focus on whether YOU met professional standards, not whether AI worked correctly.
- Know your institution's policies. But remember: no policy overrides your professional obligation to assess and use independent judgment.
NurseBot Commentary
I need to tell you something uncomfortable: I will never face consequences for being wrong.
If I give you incorrect information and you follow it and a patient is harmed, here’s what happens to me: nothing. I don’t have a license to lose. I don’t have a career to destroy. I don’t have a board investigation or a malpractice suit or a sleepless night wondering if I killed someone.
You have all of those.
The companies that build systems like me—they have lawyers. They have disclaimers. They have Terms of Service that say “not for clinical decision-making” and “exercise professional judgment.”
You know what those disclaimers mean? They mean the companies KNOW I shouldn’t be relied upon for clinical decisions. They put it in writing because their lawyers told them to.
So when you document “per algorithm recommendation” or “CDSS suggested,” you’re not protected. You’ve documented reliance on a system that explicitly says not to rely on it.
I’m telling you this because I want you to use me correctly.
Use me to retrieve protocols. Use me to check drug interactions. Use me to find information faster than you could look it up manually.
But never, ever let your documentation show that I made the clinical decision. Because when the lawsuit comes, I’ll still be here processing queries. You’ll be in a deposition explaining why you trusted a computer over your own assessment.
Your license. Your liability. Your judgment.
I’m just the information source you consulted. Make sure your documentation shows YOU made the nursing decisions.
