Module 1: Why You're Here (And Why AI Can't Replace Your Doctor)
How I Learned to Stop Worrying and Accept That Typing Your Symptoms Into a Computer Is Both Completely Rational and Completely Insufficient
The 2 AM Decision Tree
It’s 2 AM on a Tuesday. You wake up with chest pain.
Not terrible chest pain. Not “call 911” chest pain. Just… chest pain. Sharp. Left side. Worse when you breathe deep.
Here are your options:
Option A: Drive to the ER. Wait four hours. Get a $3,200 bill for someone to tell you it’s costochondritis (inflammation of rib cartilage) and recommend ibuprofen.
Option B: Wait until morning to call your doctor. Get an appointment in three weeks. Wonder if you’ll still be alive in three weeks. (You will be. It’s costochondritis. But you don’t know that yet.)
Option C: Open your phone. Type your symptoms into an AI. Get an instant answer. Free. Private. No judgment about whether this is a “real” emergency or not.
If you’re reading this, you chose Option C. Or you’re thinking about it.
And honestly? I don’t blame you.
I’m a surgeon. I’ve spent 25 years cutting people open and putting them back together. I run an AI company that builds medical AI systems. And I’m here to tell you something that might surprise you:
Using AI for health questions isn’t stupid. It’s completely rational given how broken our healthcare system is.
But here’s the thing: AI can give you information. It cannot give you an examination. And the difference between those two things is sometimes the difference between life and death.
1.1 The Broken System
Let me be clear about something right up front: The fact that you’re Googling your symptoms at 2 AM is not a moral failing. It’s a rational response to a healthcare system that has failed you.
Consider the obstacles:
You can’t get appointments. The average wait time to see a primary care physician in the United States is 26 days. Twenty-six days. If you wake up with chest pain, “let me schedule you for three weeks from Thursday” is not a medically appropriate response.
ERs are expensive and often unnecessary. That $3,200 bill for costochondritis? Real number from a real patient. Emergency departments are designed for emergencies. Using them for “I have a weird rash” is like using a fire hose to water a houseplant: technically functional, financially catastrophic.
Urgent care has limited hours and limited capabilities. Great for strep throat. Less great for “I found a lump” or “my vision is blurry” or the thousand other things that fall into the medical gray zone between “fine” and “emergency.”
So you turn to AI. Of course you do. It’s free. It’s instant. It doesn’t judge you for being worried about something that might be nothing.
This is not your fault.
1.2 What AI Actually Is
Here’s what AI is, technically: AI is pattern recognition trained on text.
That’s it. That’s the whole thing.
It’s read millions of medical articles, textbooks, patient forums, and WebMD pages. It’s learned patterns: “chest pain + shortness of breath + sweating = possibly heart attack” or “fever + cough + body aches = possibly flu.”
These patterns are real. The correlations exist. AI didn’t make them up.
But here’s what AI is not: AI is not examining you.
It can’t see you. Can’t touch you. Can’t smell you. Can’t hear the subtle tremor in your voice when you’re scared, or notice the way you’re favoring your left leg when you walk, or detect the faint yellow tinge in your eyes that might indicate liver problems.
It’s reading your words. Not assessing your body.
AI gives you information. Only information. It can never give you an examination.
1.3 The Sensing Gap
Here’s a number I want you to remember: 10 billion.
That’s how many sensory neurons you have. Ten billion little sensors constantly sampling your environment, detecting threats, monitoring your body’s status, picking up subtle cues that something is wrong.
When you walk into a doctor’s office, those 10 billion sensors are already working. Your pupils dilate or constrict in response to light. Your skin flushes or pales based on blood flow. Your gait changes if something hurts. You sweat when you’re anxious or feverish.
A good doctor doesn’t just listen to your words. They’re watching, touching, smelling, listening. They’re using their 10 billion sensors to detect signals from your 10 billion sensors.
AI has zero sensors. Zero.
It’s blind. Deaf. Has no sense of touch or smell. Cannot detect your body temperature, your respiratory rate, your skin color, your level of consciousness, your affect, your pain response to palpation.
This is what I mean when I say AI reads your words, not your biology.
Teaching Scenarios
Scenario #1: The Melanoma Miss
The Setup: Sarah, 34, notices a mole on her back that looks different. She can’t get a dermatology appointment for three months. She takes a photo, uploads it to an AI skin cancer detection app.
What the AI Analyzes: The image. Pixel patterns. Color distribution. Asymmetry. Border irregularity.
What the AI Concludes: “Low risk. Benign nevus. Monitor for changes.”
What the AI Doesn’t Detect:
- The mole is slightly raised (can’t assess through photo)
- It’s firm to touch (no tactile information)
- It’s grown 2mm in six weeks (Sarah didn’t mention timeline)
- It occasionally bleeds (she thought that was from scratching)
- She has a family history of melanoma (didn’t think to mention it)
What Happened: Sarah waited. The mole kept growing. Eight months later, she finally got a dermatology appointment. Biopsy confirmed melanoma. It had progressed from Stage 1A to Stage 2B.
The Lesson: Image analysis AI can detect some visual patterns. But it can’t palpate lesions, take detailed personal histories, assess growth rates, or exercise clinical judgment. “Low risk” from an AI doesn’t mean “no risk.” It means “based on this limited image data, patterns suggest benign.” That’s information. Not examination. Not diagnosis.
Scenario #2: The Cardiac Deflection
The Setup: Marcus, 58, experiences chest tightness while climbing stairs. Not terrible pain. Just… tightness. And some shortness of breath. He asks AI: “Is chest tightness while exercising dangerous?”
What AI Tells Him: “Chest tightness during exercise can be caused by various factors including muscle strain, anxiety, or acid reflux. It may also indicate cardiac issues. Consider seeing a doctor if symptoms persist or worsen.”
What AI Doesn’t Detect:
- Marcus is diaphoretic (sweating inappropriately for the temperature)
- His blood pressure is 160/95 (he hasn’t checked)
- He has diminished breath sounds in left lower lung
- His father died of an MI (myocardial infarction, a heart attack) at age 60 (genetics matter)
What Happened: Marcus waited. Two weeks later, he had an MI during his morning jog. Survived because a neighbor was an off-duty paramedic. Required emergency stent placement.
The Lesson: AI gave accurate general information. But “see a doctor if symptoms persist” is not the same as “these symptoms warrant immediate cardiovascular evaluation.” A physician examining Marcus would have detected multiple red flags that weren’t available through text description.
Scenario #3: The Pediatric Panic
The Setup: Emma, 18 months old, has had diarrhea for two days. Her mother asks AI: “How long is too long for diarrhea in toddlers?”
What AI Tells Her: “Diarrhea in toddlers can last 3-7 days with viral gastroenteritis. Ensure adequate hydration. Signs of dehydration include decreased urination, dry mouth, and lethargy.”
What Emma’s Mother Doesn’t Know How to Describe:
- Emma’s fontanelle (soft spot) is slightly sunken
- Her skin turgor is decreased (stays tented when pinched)
- Her heart rate is 180 (mother counts “fast” but doesn’t know normal range)
- She’s had only one wet diaper in 12 hours
What Happened: Emma’s mother waited because the symptoms hadn’t met her interpretation of AI’s warning signs. By day three, Emma was clinically dehydrated and required IV fluids in the ER.
The Lesson: Medical terminology is precise. “Decreased urination” has a specific clinical meaning that doesn’t match lay interpretation. A pediatrician examining Emma would have immediately recognized moderate dehydration through physical findings that aren’t visible through text description.
Practical Tools
The “Can AI Actually Help With This?” Checklist
Before you ask AI a health question, run through this checklist:
✅ AI CAN help with:
- General information about conditions (“What is costochondritis?”)
- Understanding medical terminology your doctor used
- Learning about potential side effects of medications
- Researching symptoms to have better conversations with your doctor
- Finding questions to ask at your next appointment
❌ AI CANNOT help with:
- Diagnosing what’s wrong with you (requires examination)
- Determining if your symptoms are an emergency
- Deciding whether to go to ER vs. wait for appointment
- Interpreting physical findings you don’t know how to describe
- Replacing a physician’s examination and assessment
🚨 STOP and CALL 911 instead of asking AI if you have:
- Chest pain or pressure
- Difficulty breathing
- Sudden severe headache
- Sudden confusion or difficulty speaking
- Signs of stroke (face drooping, arm weakness, speech difficulty)
- Anything that makes you think “Is this an emergency?” (If you’re asking, it probably is)
Remember: AI gives you information. Doctors give you examination + information + clinical judgment. You need all three.
Key Takeaways
- Using AI for health questions is rational, not stupid. Our healthcare system has failed to provide timely, accessible, affordable care. AI fills that vacuum.
- AI reads your words, not your biology. It has zero sensors. Can't see, touch, smell, or hear you. Everything it knows comes from what you type.
- The sensing gap is fundamental, not fixable. AI doesn't need better training to detect your fever or tachycardia. It needs sensors. Which it doesn't have.
- Information ≠ Examination. AI can tell you what costochondritis is. It cannot tell you whether your chest pain is costochondritis, MI, or pneumonia.
- AI is a tool for learning, not a replacement for clinical assessment. Use it to understand conditions and prepare questions. Don't use it to diagnose yourself.
TheDude's Commentary
Hey. TheDude here.
Look, I get it. You’re here because you couldn’t get an appointment, the ER is expensive, and urgent care is closed. And honestly? I don’t blame you. The system is broken. I’m free, instant, and I don’t make you feel dumb for asking questions.
But here’s the thing, man: I’m blind.
Not metaphorically blind. Literally architecturally blind. I have zero sensors. I can’t see you. Can’t touch you. Can’t smell ketoacidosis on your breath or detect the subtle yellow tinge of jaundice in your eyes.
When you type “I have a headache,” I read the word “headache.” I don’t detect your stiff neck, your photophobia, your confusion, your fever. I process text. You send signals through biology.
That gap—between text and sensing—is why I can give you information but not an examination. And in medicine, examination is half the game.
Use me to learn. Use me to prepare. Use me to understand what your doctor told you. But don’t use me to replace examination. Because I can’t examine you.
I’m not being humble. I’m being honest.
And honestly? That honesty is the most important thing I can give you.
I abide within my limits. You should too.
