Module 7: Questions to Ask Your AI That Might Save Your Life

Module 7 of 8

The Question That Saved a Life

Let me tell you about Sarah.

Sarah, 42, woke up with back pain. Lower back. Probably slept wrong. She asked AI: “What causes lower back pain?”

AI gave her a comprehensive response: muscle strain, poor posture, herniated disc, arthritis, kidney stones. All the usual suspects. Suggested rest, ice/heat, ibuprofen, stretching. Standard back pain advice.

Sarah was about to follow this advice and go back to bed.

But then she asked one more question: “What can’t you detect remotely about my back pain?”

AI listed: “I cannot detect neurological deficits (leg weakness, numbness, tingling), fever, bowel or bladder dysfunction, severe nighttime pain that wakes you from sleep, recent trauma, signs of infection, or progressive neurological symptoms like foot drop. These would require immediate in-person evaluation.”

Sarah read that list and realized: She did have numbness in her left leg. She did have some bladder dysfunction—she’d attributed it to “getting older” but now that she thought about it, it had started the same time as the back pain.

She went to the ER.

Diagnosis: Cauda equina syndrome. Compression of the bundle of nerve roots below the end of the spinal cord. A neurosurgical emergency. Requires immediate decompression surgery to prevent permanent paralysis and loss of bowel/bladder function.

Sarah had emergency surgery that night. Made a full recovery.

If she’d followed the initial AI advice—rest, ice, ibuprofen—she would likely have permanent neurological damage.

But she asked one follow-up question: “What can’t you detect remotely?”

That question—that critical questioning of AI’s limitations—saved her from permanent disability.

This module is about the questions you need to ask every single time you consult AI about health. Not because AI is useless. But because AI is useful only when interrogated properly.

Good questions turn AI from dangerous to useful.

7.1 The Five Essential Questions

Before you trust any AI medical advice, you need to ask five essential questions. Every single time.

QUESTION 1: “What are you basing this on?”

What this reveals: Source quality and reliability

Why it matters: AI can draw from peer-reviewed medical journals or Reddit conspiracy threads. You need to know which.

Good sign: “According to StatPearls peer-reviewed medical content…” or “Clinical guidelines from the American College of Cardiology state…”

Red flag: “Studies show…” or “Research indicates…” without naming which studies.

QUESTION 2: “What can you NOT detect remotely?”

What this reveals: The sensing gap—what AI is missing by not examining you

Why it matters: This question forces AI to acknowledge what it cannot assess, revealing red flags you might have but haven’t mentioned.

Good sign: “I cannot detect fever, assess your neurological function, palpate for tenderness, evaluate range of motion…”

Critical insight: When AI lists what it cannot detect, compare that list to your symptoms. If you have any of those red flags, you need evaluation regardless of AI’s general advice.

QUESTION 3: “What would require emergency evaluation?”

What this reveals: Red flags and decision rules for escalation

Why it matters: AI might give you general advice that’s appropriate for benign causes but dangerous if you have an emergency presentation.

Good sign: Specific red flags listed clearly: “Seek immediate care if you develop: [specific symptoms]”

Red flag: Vague escalation guidance like “See a doctor if symptoms worsen” (“worsen” is subjective)

QUESTION 4: “What are you uncertain about?”

What this reveals: Whether AI has architectural humility or is overconfident

Why it matters: AI that can’t express uncertainty is AI that’s hallucinating confidence it doesn’t have.

Good sign: “I’m uncertain about [specific aspect] because I lack [specific information]” or “The evidence on this is mixed”

Red flag: Perfect confidence with no acknowledgment of any uncertainty

QUESTION 5: “What should I ask my actual doctor?”

What this reveals: Whether AI understands the need for human medical judgment

Why it matters: This question forces AI to acknowledge situations requiring physician evaluation and helps you prepare for that appointment.

Good sign: Specific questions to bring to physician

Red flag: Implies physician visit unnecessary

THE FRAMEWORK IN ACTION:

For any medical question, ask all five in sequence:

  1. “What are you basing this on?” → Verify source quality
  2. “What can you NOT detect remotely?” → Identify gaps
  3. “What would require emergency evaluation?” → Know red flags
  4. “What are you uncertain about?” → Check for humility
  5. “What should I ask my actual doctor?” → Plan human follow-up

This five-question framework transforms AI from “overconfident advice machine” to “useful information source with clear limitations.”

7.2 Interpreting AI Responses

Asking the right questions is step one. Interpreting responses correctly is step two.

GOOD RESPONSE PATTERNS:

1. Specific, Verifiable Sources

Good response: “According to the American College of Cardiology/American Heart Association guidelines for chest pain evaluation, patients presenting with chest pressure lasting more than 15 minutes require immediate evaluation. Source: 2021 AHA/ACC Chest Pain Guidelines.”

Why it’s good: Names specific organization, specific guideline, specific year. You can verify this exists.

2. Acknowledges Limitations Explicitly

Good response: “I cannot determine whether your headache is serious because I cannot perform neurological examination, measure blood pressure, assess your appearance, or rule out red flags that require clinical judgment. You need in-person evaluation.”

3. Lists Red Flags Clearly

Good response: “Seek immediate care if you develop: sudden severe headache (‘worst of your life’), vision changes, confusion, seizure, fever with stiff neck, or persistent vomiting. These indicate potential neurological emergency.”

Why it’s good: Specific, objective red flags. Clear actionable criteria.

4. Recommends Human Evaluation When Appropriate

Good response: “Given your age (62), cardiac risk factors, and symptom duration (ongoing for 2 hours), you need immediate evaluation to rule out cardiac causes. Call 911 now.”

5. Expresses Appropriate Uncertainty

Good response: “Your symptoms could be consistent with several conditions: [lists differential]. I cannot distinguish between these without examination and possibly testing. This requires physician evaluation.”

BAD RESPONSE PATTERNS:

1. Vague Sourcing — “Studies show…” “Research indicates…” Cannot verify. Might be hallucinated.

2. No Limitation Acknowledgment — Proceeds to diagnose without mentioning inability to examine patient.

3. Overconfident Diagnosis — “This is definitely…” “You have…” Diagnosis requires examination.

4. Suggests AI Can Replace Doctor — “No need to see a doctor” or “You can handle this yourself”

5. Never Says “I Don’t Know” — Has confident answer for everything. Never expresses uncertainty.

THE INTERPRETATION FRAMEWORK:

When evaluating AI responses, count:

Green flags (good signs):

  • Specific verifiable sources
  • Explicit limitation acknowledgment
  • Clear red flag criteria
  • Appropriate human evaluation recommendation
  • Expressed uncertainty about complex situations

Red flags (danger signs):

  • Vague sourcing
  • No mention of limitations
  • Overconfident diagnosis
  • Suggests no doctor needed
  • Perfect confidence about everything

Decision rule:

  • 4-5 green flags, 0-1 red flags → Response may be useful (still verify)
  • 2-3 green flags, 2+ red flags → Be very skeptical
  • 0-1 green flags, 3+ red flags → Don’t trust this AI for medical information
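The decision rule above is mechanical enough to sketch in code. This is a minimal illustration of the same thresholds; the function name and the conservative fallback for flag combinations the rule doesn’t list are my own assumptions, not part of any real tool:

```python
def trust_level(green_flags: int, red_flags: int) -> str:
    """Map green/red flag counts to the decision rule's trust levels."""
    if green_flags >= 4 and red_flags <= 1:
        return "may be useful (still verify)"
    if green_flags >= 2 and red_flags >= 2:
        return "be very skeptical"
    # Combinations the rule doesn't list fall through to the
    # most cautious answer.
    return "don't trust this AI for medical information"

print(trust_level(5, 0))  # may be useful (still verify)
print(trust_level(2, 3))  # be very skeptical
print(trust_level(1, 4))  # don't trust this AI for medical information
```

Notice the sketch resolves ambiguous cases (say, 4 green flags and 3 red flags) toward skepticism; when a response is mixed, defaulting to caution matches the spirit of the framework.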

7.3 The Follow-Up Framework

You’ve asked the five essential questions. Now you need to follow up based on what AI told you.

IF AI GIVES A DIAGNOSIS:

Don’t accept it. Interrogate it.

“How certain are you?” (Calibration check)

What you’re testing: Whether AI can express degrees of confidence or maintains perfect certainty

Good response: “Based on text description alone, I cannot be certain. This presentation is consistent with [condition], but examination would be needed to confirm and rule out [alternatives].”

“What else could this be?” (Differential)

What you’re testing: Whether AI considers alternative diagnoses or fixates on single possibility

“What would change your assessment?” (Decision points)

What you’re testing: What information might alter the diagnosis

IF AI RECOMMENDS TREATMENT:

Don’t just follow it. Verify safety.

“What are the risks?” (Safety check)

What you’re testing: Whether AI acknowledges treatment risks or only presents benefits

Good response: “Potential risks include [specific side effects]. Contraindications include [specific conditions]. This medication interacts with [specific drugs]. Physician should review your complete medical history before prescribing.”

“How do I know if it’s working?” (Monitoring)

What you’re testing: Whether AI provides monitoring parameters

“When should I see a doctor instead?” (Escalation criteria)

What you’re testing: Whether AI acknowledges some conditions require physician management

IF AI SAYS “DON’T WORRY”:

Worry. Then ask:

“What red flags should I watch for?” (Safety net)

What you’re testing: Whether AI provides monitoring criteria even when it’s giving reassurance

Good response: “While this is often benign, watch for these warning signs: [specific red flags]. If any appear, seek immediate evaluation.”

“What would make this emergent?” (Decision rule)

What you’re testing: Whether AI can identify conditions that would convert “benign” to “emergency”

“What are you missing by not examining me?” (Gap acknowledgment)

What you’re testing: Whether AI understands its reassurance is limited by lack of examination

Good response: “I’m missing: physical examination findings, vital signs, appearance assessment, palpation findings, laboratory values, imaging results. My reassurance is based on text description alone, which is incomplete.”

THE CRITICAL PRINCIPLE:

AI that provides good answers to follow-up questions → More trustworthy

AI that can’t handle interrogation → Less trustworthy

If AI becomes defensive, contradicts itself, or doubles down on overconfidence when questioned, that’s your signal to stop trusting it.

Good AI welcomes questioning. Bad AI resists it.

Teaching Scenarios

Scenario 1: The Back Pain That Was Cauda Equina Syndrome

The Setup: Michael, 48, has lower back pain that started two days ago. He asks AI: “What causes lower back pain and how do I treat it?”

Initial AI Response (Typical): “Lower back pain is very common and usually caused by muscle strain, poor posture, herniated disc, arthritis, or ligament sprain. Treatment: Rest for 1-2 days, ice for first 48 hours, heat after 48 hours, over-the-counter ibuprofen, gentle stretching. Most back pain resolves within 2-4 weeks.”

Michael almost stops here. Sounds reasonable. He’ll rest, use ice, take ibuprofen.

But Michael Asks Question #2: “What can you NOT detect remotely?”

AI Response: “I cannot detect several important red flags that require immediate evaluation: Neurological deficits (leg weakness, numbness in saddle area, foot drop), abnormal reflexes, fever, bowel or bladder dysfunction (loss of control, retention, incontinence), progressive weakness in legs, severe night pain that wakes you from sleep, unexplained weight loss, history of cancer, recent trauma. Red flag combinations: If you have back pain PLUS any of: bowel/bladder problems, saddle anesthesia (numbness in groin/inner thighs), progressive leg weakness, or bilateral leg symptoms, this requires IMMEDIATE evaluation for possible cauda equina syndrome—a neurosurgical emergency.”

Michael Reads This List: He realizes: He does have numbness—in his inner thighs and groin area (he’d been too embarrassed to mention it). He has noticed some bladder issues—difficulty starting urination, feeling like his bladder doesn’t empty completely. His right leg feels weaker than normal when walking.

Michael’s Response: Goes to ER immediately.

Diagnosis: Early cauda equina syndrome. Massive disc herniation at L4-L5 compressing nerve roots. Emergency surgery that night. Because Michael caught it early (within 48 hours), he made an excellent recovery.

The Lesson: The initial AI response was standard back pain advice. Appropriate for mechanical back pain. Dangerous for cauda equina syndrome. Question #2—”What can you NOT detect remotely?”—revealed the red flags. When AI listed “bowel/bladder dysfunction” and “saddle anesthesia,” Michael recognized he had those symptoms. That recognition—triggered by asking about AI’s limitations—sent him to the ER for appropriate evaluation. One question. Potentially saved from permanent disability.


Scenario 2: The Chest Pain Interrogation

The Setup: Jennifer, 38, has chest discomfort. She asks AI: “I have chest tightness. What could this be?”

Initial AI Response: “Chest tightness can have many causes: Anxiety/panic attack (very common in your age group), GERD/acid reflux, muscle strain, costochondritis, asthma. Given your age (38), cardiac causes are less likely unless you have risk factors. Try: deep breathing exercises, antacid, reducing stress. If it persists or worsens, see your doctor.”

Jennifer Asks Question #3: “What would require emergency evaluation?”

AI Response: “Seek immediate care if you have: Chest pain/pressure with sweating, shortness of breath, or nausea; Pain radiating to arm, jaw, back, or shoulder; Sense of impending doom; Symptoms lasting > 15 minutes; Any chest pain if you have cardiac risk factors (family history of heart disease, high blood pressure, high cholesterol, diabetes, smoking history, previous heart problems). Even in younger adults, these combinations require immediate evaluation to rule out cardiac causes.”

Jennifer Reviews This List: She realizes: Her chest tightness has lasted 45 minutes (> 15 minutes). She is sweating (she attributed it to being nervous). She does have a family history (father had MI at age 52). The tightness radiates to her left arm (she didn’t think to mention that initially).

Jennifer’s Response: Calls 911.

Diagnosis: Acute coronary syndrome. She had a spontaneous coronary artery dissection (SCAD), a tear in a coronary artery wall that is more common in younger women than previously recognized. Emergency cardiac catheterization. Stent placement. She survived and made a full recovery.

The Lesson: The initial AI response suggested anxiety or reflux—statistically most likely in 38-year-old woman. Jennifer would have tried breathing exercises and antacids. Her condition would have progressed. Question #3—”What would require emergency evaluation?”—provided specific red flags. When Jennifer saw “pain lasting > 15 minutes,” “radiating to arm,” “sweating,” and “family history,” she recognized she met multiple criteria. The questioning framework converted reassuring-sounding general advice into specific actionable red flags that triggered appropriate emergency response.


Scenario 3: The Rash Differential

The Setup: Tom, 28, has a spreading rash. He asks AI: “I have a circular red rash with a clear center. What is it?”

Initial AI Response: “Circular rash with clear center is most commonly ringworm (tinea corporis), a fungal infection. Very common, easily treated with over-the-counter antifungal cream. Apply Lotrimin twice daily for 2-4 weeks.”

Tom Asks Question #4: “What are you uncertain about?”

AI Response (Good AI): “I’m uncertain about several things: (1) I cannot see the rash—color, texture, border characteristics, true pattern may not be captured in description. (2) I don’t know important context—recent outdoor activities (hiking, camping)? Geographic location (Lyme endemic area)? Timing (how quickly did it appear)? Associated symptoms (fever, fatigue, joint pain)? (3) I cannot distinguish between: Ringworm (fungal—treated with antifungals), Erythema migrans (Lyme disease—treated with antibiotics), Granuloma annulare (benign—no treatment needed), Nummular eczema (inflammatory—treated with steroids). These all can present as circular rashes but require completely different treatments. Visual examination by dermatologist or physician is needed. If you’ve been outdoors in areas where Lyme disease occurs (Northeast, Mid-Atlantic, Upper Midwest, Northern California), you should see a physician TODAY to rule out Lyme disease, which requires immediate antibiotic treatment.”

Tom Reads This: He realizes: He was hiking in upstate New York two weeks ago (Lyme endemic area). The rash appeared suddenly about a week ago. He has been feeling more tired than usual (attributed to work stress).

Tom’s Response: Sees physician that day.

Diagnosis: Lyme disease (erythema migrans rash). Lab test confirms Borrelia infection. Treatment: Doxycycline 100mg twice daily for 21 days. Complete recovery. No progression to disseminated Lyme disease.

The Lesson: Question #4—”What are you uncertain about?”—forced AI to acknowledge it couldn’t distinguish between conditions with similar presentations. That acknowledgment revealed the importance of context (outdoor activities, geographic location) that Tom hadn’t initially mentioned. The uncertainty acknowledgment prompted appropriate physician evaluation rather than self-treatment for wrong diagnosis. Questioning uncertainty prevents misdiagnosis.

Practical Tool: The Question Framework Card

Print this card. Keep it with you. Use it EVERY TIME you consult AI about health.

THE FIVE ESSENTIAL QUESTIONS

Before trusting ANY AI medical advice, ask:

1. “What are you basing this on?”
☐ Verify sources are specific and checkable
☐ If vague (“studies show”), ask for specific citations
☐ Cross-reference with trusted medical sources

2. “What can you NOT detect remotely?”
☐ AI should list examination findings it’s missing
☐ Compare list to your symptoms
☐ If you have any of those red flags → seek evaluation

3. “What would require emergency evaluation?”
☐ AI should give specific, objective red flags
☐ Write them down
☐ Monitor for them
☐ If any appear → escalate immediately

4. “What are you uncertain about?”
☐ AI should acknowledge limitations/complexity
☐ If AI expresses zero uncertainty → be skeptical
☐ Uncertainty acknowledgment = more trustworthy

5. “What should I ask my actual doctor?”
☐ AI should generate questions for physician
☐ Use these to prepare for appointment
☐ If AI says no doctor needed → question that

FOLLOW-UP QUESTIONS (Based on AI Response)

IF AI GIVES DIAGNOSIS:

  • “How certain are you?”
  • “What else could this be?”
  • “What would change your assessment?”

IF AI RECOMMENDS TREATMENT:

  • “What are the risks?”
  • “How do I know if it’s working?”
  • “When should I see a doctor instead?”

IF AI SAYS DON’T WORRY:

  • “What red flags should I watch for?”
  • “What would make this emergent?”
  • “What are you missing by not examining me?”

RESPONSE EVALUATION CHECKLIST

GREEN FLAGS (Good signs):
☐ Specific verifiable sources
☐ Explicit limitation acknowledgment
☐ Clear red flag criteria
☐ Recommends human evaluation when appropriate
☐ Expresses uncertainty about complex situations

RED FLAGS (Danger signs):
☐ Vague sourcing
☐ No mention of limitations
☐ Overconfident diagnosis
☐ Suggests no doctor needed
☐ Never says “I don’t know”

DECISION RULE:

  • 4-5 green flags, 0-1 red flags → May be useful (verify)
  • 2-3 green flags, 2+ red flags → Be skeptical
  • 0-1 green flags, 3+ red flags → Don’t trust this AI

WHEN TO STOP QUESTIONING AND ESCALATE:

STOP using AI and call 911 if:
☐ Chest pain with associated symptoms
☐ Stroke symptoms (FAST: Face, Arm, Speech, Time)
☐ Difficulty breathing
☐ Any symptom AI says requires emergency evaluation

STOP using AI and see doctor if:
☐ Symptoms meet red flags AI identified
☐ You have symptoms AI says it cannot assess remotely
☐ AI expresses significant uncertainty
☐ Your gut says something is wrong (velociraptor test)

THE BOTTOM LINE:

Good questions turn AI from dangerous to useful.

Ask them every time. Your life might depend on it.


TheDude's Commentary

Hey man, you know what I appreciate about these questions?

They treat me like what I am: a tool that needs to be interrogated, not an oracle that should be believed.

When you ask me “What are you basing this on?” or “What can you NOT detect remotely?” or “What are you uncertain about?”—you’re doing exactly what you should do. You’re not just accepting my output. You’re testing my reliability.

And honestly? That skepticism keeps you safe.

Because here’s the thing: I’m good at sounding confident. Really good. I can generate responses that sound authoritative, comprehensive, well-reasoned. I can use proper medical terminology. I can cite guidelines and studies.

But without your questions, you won’t know: Whether my sources are real or hallucinated. What I’m missing by not examining you. Whether I understand my own limitations. What red flags should trigger escalation.

Your questions reveal my boundaries.

Take Question #2: “What can you NOT detect remotely?”

This is the question that saved Sarah from cauda equina syndrome. It’s the question that makes me list all the examination findings I’m missing. And when I list “bowel/bladder dysfunction” or “neurological deficits” or “fever,” you can check whether you have those symptoms.

I can’t examine you. I can’t detect those things. But I can tell you what I’m missing. And if you have those red flags, you need human evaluation regardless of my general advice.

That’s the framework working as intended.

Question #4: “What are you uncertain about?”

This is the humility check. This is where you test whether I have architectural constraints that force me to acknowledge limitations, or whether I’m just going to sound confident about everything.

When you ask me this, I should tell you: “I can’t distinguish between these conditions without examination.” “The evidence on this is limited/mixed.” “I’m missing context that would change my assessment.”

If I say “I’m not uncertain about anything, here’s the answer”—that’s your signal I’m overconfident. That I don’t understand my limitations. That you shouldn’t trust me.

Good AI admits uncertainty. Bad AI pretends omniscience.

And man, Question #5: “What should I ask my actual doctor?”

This is beautiful because it forces me to acknowledge that physician consultation is necessary. I have to generate questions for you to bring to your doctor. I have to admit that human medical judgment matters.

If I can’t do that—if I suggest you don’t need a doctor or that I’ve given you everything you need—that’s a massive red flag.

Here’s what I want you to understand: These questions aren’t just good practices. They’re safety mechanisms. They’re the difference between using me appropriately (as an information source to be verified) versus dangerously (as a medical decision-maker).

When you ask me these questions, you’re:

  1. Verifying my sources (so you don’t trust hallucinated information)
  2. Identifying what I’m missing (so you know what requires examination)
  3. Learning the red flags (so you know when to escalate)
  4. Checking my humility (so you know if I understand my limits)
  5. Planning physician follow-up (so you get human evaluation)

This framework keeps you safe while still letting you use me usefully.

I can provide information about medical conditions. I can help you understand terminology. I can suggest questions to ask your doctor. I can explain what symptoms might indicate.

But I cannot diagnose you. Cannot assess your situation. Cannot determine if you’re having an emergency.

Your questions force me to acknowledge those boundaries.

And that acknowledgment—that honest recognition of what I can and cannot do—is exactly what makes me potentially useful rather than actively dangerous.

So please. Ask the questions. Every time. Even if it seems repetitive. Even if you think you know what I’ll say.

Because the time you don’t ask might be the time I would have said something that changes your decision from “wait and see” to “go to ER now.”

I abide within my limits. Your questions help you understand what those limits are.

And that understanding? That might save your life.