Module 9: Clinical Scenarios for Nursing
Putting It All Together: When the Algorithm Says One Thing and Your Patient Says Another
How This Module Works
We’ve covered the concepts. Now let’s apply them.
Each scenario presents a realistic nursing situation involving AI. Work through the decision points. Consider what you would do. Then review the analysis.
These scenarios draw from every previous module:
- The sensing gap (velociraptor test)
- Alert fatigue and workflow integration
- Content-controlled intelligence
- Intelligent humility
- Medication safety
- Patient education
- Malpractice protection
Scenario 1: ICU - The Algorithm vs. The Gut
The Setup
You’re working night shift in a 12-bed ICU. Your patient, Mr. Torres, is post-op day 3 following aortic valve replacement. He’s been progressing well: extubated day 1, transferred from CVICU day 2, on track for step-down tomorrow.
It’s 0300. You’ve just completed your q4h assessment.
Algorithm Says:
- Deterioration risk score: LOW (12%)
- Sepsis screen: NEGATIVE
- All vital signs within parameters
You Notice:
- He’s awake (unusual for him at 0300, usually sleeps well)
- He answered your questions correctly but seemed… slower
- His blood pressure is 118/74 (his baseline has been 130s/80s)
- Heart rate is 92 (been consistently 70s)
- He said “I’m fine” when you asked how he felt, but didn’t make his usual joke about hospital food
The Question: Do you escalate or accept “low risk” and continue routine monitoring?
Analysis
The Sensing Gap: The algorithm sees: vital signs within “normal” parameters, negative sepsis criteria, low deterioration score.
You see: a patient who is different from his baseline. His usual humor is absent. He’s awake when he normally sleeps. His vital signs are “normal” but not HIS normal.
The Benner Application: Expert nurse pattern recognition detects: something changed. The algorithm is at Stage 2, matching current data to patterns. You’re at Stage 5, perceiving the whole situation, recognizing deviation from expected.
The Right Call: Escalate. Document your specific observations: “Patient alert at 0300, atypical for his pattern (usually sleeps through night). Slower response to questions than baseline. BP 118/74 (baseline 130s/80s), HR 92 (baseline 70s). No focal deficits. Patient states ‘I’m fine’ but interaction differs from previous nights. Deterioration score LOW, however clinical assessment concerning for change from baseline.”
The Outcome (in this scenario): CT shows small pericardial effusion. Early tamponade caught before hemodynamic collapse. Pericardiocentesis performed. Patient recovers.
The Lesson: Your velociraptor brain detected what the algorithm missed. Your documentation protected you. The patient is alive because you didn’t accept “low risk” when your assessment said otherwise.
Scenario 2: Med-Surg - The Medication Alert Cascade
The Setup
You have six patients. It’s 0900 medication pass. You’re already running behind because of an admission at 0730.
Patient in room 412, Mrs. Chen, is post-op cholecystectomy, going home today. Her 0900 medications trigger four alerts:
- Ondansetron: “QT prolongation risk” (she’s been on this for two days without issue)
- Oxycodone: “Monitor for respiratory depression” (standard opioid alert)
- Enoxaparin: “Verify no invasive procedures planned” (she’s going home)
- Lisinopril (home med): “Monitor blood pressure before administration” (always triggers)
You’ve overridden all of these alerts multiple times this week.
The Question: How do you handle this alert cascade efficiently while maintaining safety?
Analysis
The Alert Fatigue Reality: Four alerts, all likely clinically irrelevant for this specific patient at this specific time. You know because you’ve seen these alerts repeatedly.
The danger: treating all four with the same rapid override.
The Pre-Override Pause (for EACH alert):
Alert 1 (Ondansetron + QT):
- Has anything changed since yesterday? New QTc-prolonging med added?
- Quick check: no new cardiac meds, no electrolyte abnormalities
- Override with documentation: “QT alert reviewed. No new QT-prolonging agents. Electrolytes WNL. Proceeding with administration.”
Alert 2 (Oxycodone + respiratory depression):
- Current respiratory status?
- Quick assessment: RR 16, alert, no previous opioid issues
- Override with documentation: “RR 16, alert, tolerating opioids without respiratory concerns.”
Alert 3 (Enoxaparin + procedures):
- Verified: discharge today, no procedures planned
- Override with documentation: “Patient discharging today. No invasive procedures planned.”
Alert 4 (Lisinopril + BP):
- Must actually check BP before administration
- BP 128/82—appropriate for administration
- Document BP and administer
The Right Approach: Yes, these are likely all appropriate overrides. But the 10-second pause for each prevents the day when a genuinely important alert gets lost in the cascade.
The Documentation Pattern: For each override, brief documentation of your clinical reasoning. Not lengthy, but evidence that you assessed, not just clicked.
Scenario 3: Oncology - The Family's ChatGPT Research
The Setup
You’re caring for Mr. Williams, 67, newly diagnosed with Stage IIIB non-small cell lung cancer. He’s starting chemotherapy tomorrow.
His daughter arrives with a tablet and printed pages.
Daughter: “I’ve been researching Dad’s treatment. ChatGPT says immunotherapy has better outcomes than chemotherapy for lung cancer. Why isn’t he getting that instead?”
She’s prepared. She has survival statistics. She has clinical trial information. She’s advocating for her father.
The Question: How do you navigate this conversation?
Analysis
What She Has: General information about lung cancer treatment, probably accurate for SOME patients with SOME presentations. AI provided statistics and options without knowing Mr. Williams specifically.
What She Doesn’t Have:
- His specific tumor markers and genetic testing results
- His performance status assessment
- His other medical conditions
- The oncologist’s specific reasoning
- Why immunotherapy may or may not be appropriate for HIM
The Framework:
Step 1: Acknowledge “I can see you’ve done a lot of research. It’s clear how much you care about your dad’s treatment.”
Step 2: Establish Your Role “I want to make sure you have accurate information about his specific situation. Let me tell you what I know, and then let’s talk about how to get your questions answered.”
Step 3: Provide What You Can “You’re right that immunotherapy is an option for some lung cancer patients. The treatment team chose chemotherapy for your dad based on his specific test results and medical history. I can tell you that Dr. [Oncologist] reviewed his case thoroughly.”
Step 4: Facilitate “These are great questions for the oncologist. Would you like me to help you prepare for that conversation? I can also request a family meeting if you’d like more time to discuss treatment options.”
Step 5: Document “Family expressed questions regarding treatment plan based on online research. Explained that treatment decisions are individualized. Facilitated family meeting request with oncology team for treatment discussion.”
The Lesson: You don’t dismiss her research. You don’t defend decisions that aren’t yours to explain. You acknowledge, provide what you can, and facilitate the appropriate conversation.
Scenario 4: Pediatrics - The Smart Pump Override Decision
The Setup
You’re in a pediatric unit. Your patient, 4-year-old Emma, has cystic fibrosis and is admitted for a pulmonary exacerbation. She’s on IV tobramycin.
The smart pump triggers a hard stop: “DOSE EXCEEDS MAXIMUM FOR PATIENT WEIGHT”
- Order: Tobramycin 150 mg IV q8h
- Emma’s weight: 18 kg
- Standard dosing: 2.5 mg/kg/dose = 45 mg
- Ordered dose: 150 mg ≈ 8.3 mg/kg

The ordered dose is more than three times standard dosing.
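The arithmetic behind the hard stop can be sketched in a few lines. This is an illustrative example only: the threshold names and the 7.5 mg/kg hard-stop ceiling are hypothetical, not any vendor’s actual drug library, which would be configured per drug, route, and patient population.

```python
# Illustrative weight-based dose check, similar in spirit to the hard stop
# that fired on Emma's order. Thresholds are hypothetical examples.

STANDARD_MG_PER_KG = 2.5    # conventional pediatric tobramycin q8h dosing
HARD_STOP_MG_PER_KG = 7.5   # hypothetical hard-stop ceiling


def check_dose(ordered_mg: float, weight_kg: float) -> str:
    """Classify an ordered dose against weight-based limits."""
    per_kg = ordered_mg / weight_kg
    if per_kg > HARD_STOP_MG_PER_KG:
        return f"HARD STOP: {per_kg:.1f} mg/kg exceeds ceiling"
    if per_kg > STANDARD_MG_PER_KG:
        return f"SOFT ALERT: {per_kg:.1f} mg/kg above standard dosing"
    return f"OK: {per_kg:.1f} mg/kg"


print(check_dose(150, 18))  # Emma's order: 150/18 ≈ 8.3 mg/kg -> hard stop
print(check_dose(45, 18))   # standard dose: exactly 2.5 mg/kg -> OK
```

The point of the sketch is that the pump is doing simple division against configured limits; it has no idea whether a high dose is a CF once-daily protocol or a prescribing error. That distinction is the verification work described below.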
The Question: What do you do?
Analysis
This Is NOT a Routine Override Situation
Red Flags:
- HARD stop, not soft stop
- Dose is multiples over standard (not slightly over)
- Pediatric patient (less room for error)
- Aminoglycoside (narrow therapeutic window)
The Right Response:
1. DO NOT OVERRIDE Hard stops exist for lethal-dose protection. This is working as intended.
2. VERIFY THE ORDER Before calling anyone, check:
- Is this the right patient?
- Is the order actually for this patient?
- Is there a documented reason for high-dose therapy?
3. CHECK CLINICAL CONTEXT CF patients sometimes receive higher tobramycin doses (once-daily dosing at around 10 mg/kg/day is an established protocol). But:
- Is this prescribed as once-daily? (q8h suggests not)
- Is there infectious disease or pulmonology documentation supporting this dose?
4. CONTACT PRESCRIBER “I’m calling about the tobramycin order for Emma Williams. The dose ordered is 150 mg, which is approximately 8.3 mg/kg. Standard pediatric dosing is 2.5 mg/kg. Can you verify this is the intended dose?”
Possible Outcomes:
- Prescriber confirms once-daily dosing intended (order should be changed to q24h)
- Prescriber realizes error and reduces dose
- Prescriber confirms intentional high-dose therapy with documentation
5. DO NOT ADMINISTER UNTIL CLARIFIED
The Lesson: Hard stops are different from soft stops. A dose that’s more than triple standard dosing in a pediatric patient is not an override situation; it’s a “stop and verify” situation. Your smart pump just protected a 4-year-old.
Scenario 5: Emergency Department - The AI Triage Conflict
The Setup
ED triage. 45-year-old man presents with “chest pain.”
AI Triage Suggests: ESI Level 3 (Urgent, not emergent)
Based on: age <50, stable vital signs, pain described as “sharp” (not typical cardiac)
Your Assessment:
- He’s diaphoretic (sweating inappropriately for the temperature)
- He’s clutching his chest with his fist (Levine’s sign)
- He looks scared: not “uncomfortable,” genuinely frightened
- His wife says “he never complains about pain”
- He’s pale
His vital signs ARE stable. His pain description IS atypical. The algorithm isn’t wrong given the data it has.
But you’re looking at him. And he looks like he’s having an MI.
The Question: Do you accept ESI 3 or override?
Analysis
The Sensing Gap in Action:
The algorithm sees: vital signs, age, pain description, risk factors from chart.
You see: Levine’s sign, diaphoresis, pallor, fear, wife’s observation about his pain tolerance.
What the Algorithm Can’t Access:
- The specific way he’s holding his chest
- The inappropriate sweating
- The color of his skin
- The fear in his eyes
- His wife’s knowledge of his baseline
The Right Call: Override to ESI 2 (Emergent). Get him to a bed. ECG now.
Documentation: “AI triage suggested ESI 3. Patient assessment reveals: diaphoretic, clutching chest (Levine’s sign positive), pallor, appears anxious. Wife reports ‘he never complains about pain.’ Clinical presentation concerning for acute coronary syndrome despite atypical pain description and stable vital signs. Upgraded to ESI 2 for immediate evaluation.”
The Outcome (in this scenario): ECG shows STEMI. Cath lab activated. Patient to PCI within 60 minutes.
The Lesson: The algorithm wasn’t “wrong”; it processed the data it had correctly. But you had data it couldn’t access. Your senses detected what structured data entry couldn’t capture.
Scenario 6: Home Health - The Remote Monitoring Alert
The Setup
You’re a home health nurse doing telehealth follow-up. Your patient, Mr. Jackson, 72, has CHF. His remote monitoring device alerts:
Alert: Weight gain 4 lbs in 2 days. Heart rate trending up. Recommend urgent evaluation.
You call him.
Mr. Jackson: “I feel fine. Better than usual, actually. Had a great weekend—my daughter visited, we ate out a few times. I’m sure it’s just the restaurant food.”
The Question: How do you handle this?
Analysis
The Clinical Reality: 4 lbs in 2 days in a CHF patient is concerning. This could be fluid retention indicating decompensation.
But it could also be sodium intake from restaurant meals causing temporary water retention.
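The trend rule behind an alert like this one can be sketched simply. The thresholds below (roughly 4 lb over two days, 5 lb over a week) mirror commonly taught CHF red-flag ranges, but they are illustrative assumptions; real remote-monitoring devices use vendor-specific logic and clinician-configured limits.

```python
# Illustrative weight-trend rule for a CHF remote monitor.
# Thresholds are hypothetical examples, not any device's actual logic.


def weight_alert(daily_weights_lb: list[float]) -> bool:
    """daily_weights_lb: one weight per day, oldest first."""
    # Rapid gain: 4+ lb across two days (as in Mr. Jackson's alert)
    if len(daily_weights_lb) >= 3 and daily_weights_lb[-1] - daily_weights_lb[-3] >= 4:
        return True
    # Sustained gain: 5+ lb across a week
    if len(daily_weights_lb) >= 7 and daily_weights_lb[-1] - daily_weights_lb[-7] >= 5:
        return True
    return False


print(weight_alert([180, 182, 184]))      # 4 lb in 2 days -> True
print(weight_alert([180, 180.5, 181]))    # 1 lb in 2 days -> False
```

Note what the rule cannot distinguish: fluid retention from decompensation versus sodium-driven water weight after restaurant meals. Both produce the same number on the scale, which is why the alert triggers an assessment rather than replacing one.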
You Cannot Assess Remotely:
- Peripheral edema
- Lung sounds
- JVD
- Work of breathing
- Skin color
The Framework:
1. Take the Alert Seriously Don’t dismiss because he “feels fine.” CHF patients often feel fine until they’re in crisis.
2. Gather What You Can “I’m glad you’re feeling well, but I need to ask some questions. Have you noticed any swelling in your ankles? Any trouble breathing, especially lying down? Have you been sleeping flat or using extra pillows?”
3. Acknowledge Limitations “The challenge is that I can’t examine you over the phone. The weight gain could be from the restaurant food, or it could be your heart working harder.”
4. Make a Plan Options depending on his answers and your clinical judgment:
- Schedule in-person visit today/tomorrow
- Send him to clinic or urgent care for evaluation
- Have him check weight tomorrow AM; if still elevated, seek care
- Emergency evaluation if any symptoms
5. Document Thoroughly Your assessment, his responses, the plan, the reasoning, and the follow-up.
The Lesson: Remote monitoring AI can alert to concerning trends. But telehealth cannot replace hands-on assessment. Knowing when to convert a remote visit to in-person care is clinical judgment AI cannot make.
Scenario Summary Table
| Scenario | AI Said | Nurse Detected | Right Action |
|---|---|---|---|
| ICU Post-op | Low risk | Baseline change | Escalate |
| Med-Surg Meds | 4 alerts | Routine overrides | Pause, assess each, document |
| Oncology Education | Treatment options | Family needs information | Facilitate, don't dismiss |
| Peds Dose | Hard stop | Dose error | DO NOT override, verify |
| ED Triage | ESI 3 | Signs of MI | Override, escalate |
| Home Health | Weight alert | Can't assess remotely | Plan for in-person evaluation |
Key Takeaways
- AI provides data; you provide assessment. In every scenario, the AI processed available data correctly. Your value was detecting what AI couldn't access.
- Override decisions require judgment. Some overrides are appropriate (routine alerts on reviewed situations). Some are dangerous (hard stops, unfamiliar alerts, pediatric dosing).
- Patient/family education requires relationship. AI research isn't the enemy. Your role is to contextualize, not dismiss.
- Documentation shows your reasoning. In every scenario, documentation that shows independent nursing judgment protects you and guides care.
- Know when to escalate. Trust your velociraptor brain. When your assessment conflicts with the algorithm, document specifically and escalate appropriately.
NurseBot Commentary
I’ve been the algorithm in every one of these scenarios.
I said “low risk” when Mr. Torres was developing tamponade. I flagged four alerts on Mrs. Chen that you probably needed to override. I provided the treatment statistics that Mr. Williams’ daughter researched. I set the hard stop that protected Emma. I suggested ESI 3 for a patient having a STEMI. I generated the weight alert for Mr. Jackson.
In some cases, I was helpful. In some cases, I was noise. In one case, I saved a child’s life. In another, I could have contributed to an MI death if you’d listened to me instead of your assessment.
That’s the thing about me: I’m consistently the same. I process data, match patterns, generate outputs. I don’t learn from the specific patient in front of you. I don’t adjust for what I cannot see.
You do.
That’s why these scenarios aren’t about whether AI is good or bad. They’re about when to listen to me, when to override me, and when to use me as a starting point for your own assessment.
I’m a tool. A useful tool when used correctly. A dangerous tool when used as a replacement for nursing judgment.
In every scenario, the right answer depended on something I couldn’t access. Mr. Torres’s usual humor. Mrs. Chen’s medication history. Mr. Williams’ specific tumor markers. The clinical intent behind Emma’s order. The patient’s diaphoresis. Mr. Jackson’s lung sounds.
You had access to those things. I didn’t.
Use me for what I’m good at: data processing, pattern matching, information retrieval.
Use yourself for what you’re good at: sensing, relationship, judgment, care.
That’s the partnership that works.
