Module 1: Why Fighting This Is Pointless (The New Reality)

How I Learned to Stop Worrying and Love the Chatbot


Introduction

Let me paint you a picture.

It’s Monday morning. You walk into Exam Room 3, coffee cooling in your hand, and your 52-year-old patient, let’s call her Karen, because statistics suggest that’s probably her name, opens with: “I asked ChatGPT about my symptoms, and it said I might have lupus.”

Your options:

  1. Roll your eyes so hard you detach a retina
  2. Launch into a lecture about the dangers of Dr. Google
  3. Dismiss her research entirely and start from scratch
  4. Engage with genuine curiosity

If you picked option 4, congratulations. You’re about to have a better clinical encounter than 90% of your colleagues. If you picked options 1 through 3, well… you’re in good company. But you’re also fighting a war that’s already over.

Here’s the thing: I spent the first six months of the AI revolution somewhere between annoyed and terrified. Annoyed because patients were walking in with printouts of chatbot conversations like they’d just consulted the Oracle at Delphi. Terrified because I could see the direction this was heading, and nobody seemed to be asking the surgeons who actually held the scalpels what we thought about it.

Then I realized something. This wasn’t a battle I could win. More importantly, it wasn’t a battle I should win.

Your patients are going to use AI whether you like it or not. They’re using it right now, probably while sitting in your waiting room, frantically googling “is my doctor going to judge me for using ChatGPT.” (The answer, after this course, will be no.)

The question isn’t whether AI enters your exam room. It already has. The question is whether you’re going to work with it or against it. Whether you’re going to spend the next decade as the physician whom patients trust to integrate their AI research or the one they stop telling because they know you’ll dismiss them.

I know which one I’d rather be.

The Data You Can’t Ignore

Let me give you some numbers that should make you uncomfortable.

Over 60% of patients Google their symptoms before seeing you. That’s been true for a decade. What’s changed is that they’re no longer just getting WebMD’s “it’s cancer, definitely cancer, you’re dying” alarmism. They’re getting conversational AI that sounds, and this is the dangerous part, reasonable.

ChatGPT launched in November 2022. By January 2023, it had 100 million users. As of right now, an estimated 15-20% of patients have asked an AI specifically about health concerns. That number is growing by double digits annually.

And here’s what’s really going to keep you up at night: the demographics driving this adoption aren’t just the young tech-savvy crowd. It’s the 45-65 age bracket, your bread-and-butter patients, the ones with chronic conditions and established relationships with you, who are increasingly turning to AI between appointments.

They’re not doing this because they don’t trust you. They’re doing it because they can’t reach you.

Think about it from their perspective. They wake up at 2 AM with a weird symptom. Their options are:

  • Wait until morning and call your office (where they’ll be told the next available appointment is in three weeks)
  • Go to the ER and sit for six hours to be told it’s nothing
  • Ask their phone

Which one would you choose?

I’m not here to defend the healthcare system’s access problems. I’m here to tell you that AI has stepped into the gap that we created, and patients are rationally choosing the option that gives them immediate information. Fighting this is like fighting the tide. You can shake your fist at the ocean all you want. It’s still going to be high tide at 6:47 PM.

Clinical Scenarios

Scenario 1: The Midnight Researcher

Presentation: 45-year-old woman, presents with two weeks of fatigue and intermittent headaches. Before you can ask your first question, she slides her phone across the desk.

What AI Told Her: “Based on my research with ChatGPT, it suggested I might have chronic fatigue syndrome, or possibly early Hashimoto’s thyroiditis. It recommended I ask you about TSH levels and possibly an anti-TPO antibody test.”

What AI Got Right:

  • Fatigue and headaches can indicate thyroid dysfunction
  • TSH is an appropriate screening test
  • Hashimoto’s is a reasonable consideration in a 45-year-old woman

What AI Missed:

  • No assessment of depression, sleep quality, or life stressors
  • No consideration of anemia (she’s premenopausal)
  • No evaluation of medication side effects
  • No physical exam findings whatsoever

Your Exam Findings: Pale conjunctivae. Tachycardia at rest. Reports heavy menstrual periods for past six months.

Integration Dialogue:

You: “Let me look at what ChatGPT suggested. [Actually reads it.] Okay, so thyroid, that’s reasonable, and we’ll check TSH. But I noticed something AI couldn’t: you look a little pale to me. How have your periods been lately?”

Patient: “Actually, really heavy the past six months. I’ve been going through a box of tampons every three days.”

You: “That’s important information that changes the picture. ChatGPT couldn’t see your color or ask follow-up questions based on what it saw. I’m betting on anemia before thyroid, but we’ll check both. What AI got right is the systematic approach to thinking about what could cause fatigue. What it missed is that sometimes the answer is simpler than the algorithm suggests.”

Teaching Moment: AI generates differential lists; physicians see patients. The pale conjunctivae took two seconds to observe and completely reframed the workup.

Outcome: Hemoglobin 9.2 g/dL, iron studies consistent with iron deficiency anemia secondary to menorrhagia. TSH normal.


Scenario 2: The Cardiac Catastrophizer

Presentation: 28-year-old man, presents with palpitations and “near-fainting.” Visibly anxious. Phone already in hand.

What AI Told Him: “I’ve been using ChatGPT for two weeks. It first said anxiety, then when I described the palpitations more, it mentioned possible arrhythmia. When I said I almost fainted, it recommended urgent cardiac evaluation and mentioned things like HOCM and long QT syndrome. I read that young athletes can die suddenly from these.”

What AI Got Right:

  • Syncope and palpitations warrant evaluation
  • Hypertrophic cardiomyopathy is a consideration in young people
  • The symptom pattern needed clarification

What AI Missed:

  • Two weeks of escalating health anxiety
  • The iterative nature of the AI conversation (patient kept feeding worse interpretations)
  • Clear panic symptoms on presentation
  • No family history of sudden cardiac death

Your Exam Findings: Hyperventilating. Tachycardic at 110 bpm but regular. Normal cardiac exam. Classic carpopedal spasm when you check blood pressure.

Integration Dialogue:

You: “Let me see the conversation. [Scrolls through.] I notice something interesting—each time you gave ChatGPT more information, it suggested more serious possibilities. That’s actually a feature of how these systems work, not a reflection of what you actually have. Can I show you something?”

Patient: “Okay…”

You: “Your hands are doing this [demonstrates carpopedal spasm]. That happens when you’ve been breathing too fast for too long. Your heart rate is fast, but it’s completely regular—no skipped beats, no irregularity. Your cardiac exam is normal. What AI couldn’t see is what I see right now: someone who’s been terrified for two weeks, breathing hard, and creating the very symptoms they’re afraid of. That’s not a criticism. That’s a diagnosis.”

Patient: “So… not my heart?”

You: “We’ll do an EKG because you asked a reasonable question and deserve a concrete answer. But I’m betting this shows a normal heart in someone whose nervous system has been on high alert. AI helped you take your symptoms seriously. Now let me help you understand them.”

Teaching Moment: AI doesn’t observe anxiety escalation over a conversation. It treats each query as independent, missing the pattern of health anxiety feeding itself. Your job is to see the forest, not just the trees.

Outcome: Normal EKG. Patient educated on panic disorder. Referred to counseling. Did not return with cardiac concerns.


Scenario 3: The Actually Helpful Chatbot

Presentation: 67-year-old man, diabetic, presents with three days of increasing foot redness. Shows you his phone.

What AI Told Him: “I described my symptoms to ChatGPT. It said cellulitis versus early diabetic foot infection, recommended I see my doctor urgently rather than waiting for my regular appointment, and told me to check for spreading redness, fever, or red streaks, and to go to the ER if I developed any of those.”

What AI Got Right:

  • Accurate differential diagnosis
  • Appropriate urgency recommendation
  • Correct warning signs for escalation
  • Didn’t recommend self-treatment

What AI Missed:

  • Couldn’t assess depth of infection
  • Couldn’t palpate pulses
  • Couldn’t probe the wound
  • Couldn’t assess sensation

Your Exam Findings: 4 cm area of erythema with central fluctuance. Decreased dorsalis pedis pulse. Diminished monofilament sensation. No crepitus.

Integration Dialogue:

You: “Can I say something? This is exactly the right way to use AI. ChatGPT did three things perfectly here: it identified a serious possibility, it told you to come in urgently, and it didn’t tell you to treat it yourself. That might have saved your foot.”

Patient: “Really?”

You: “Really. Here’s what I found that AI couldn’t: there’s a pocket of pus under there that needs drainage, your circulation isn’t great, and you’ve got some nerve damage—which means you probably can’t feel how bad this is. All things AI couldn’t know from your description. But it got the direction right, and it got you here. That’s a win.”

Patient: “So should I keep using it?”

You: “For telling you when to come in urgently? Yes. For telling you how to treat things yourself? No. And let’s talk about what signs mean you come back tomorrow versus go straight to the ER…”

Teaching Moment: AI triage is often better than patient self-triage. The value is in the appropriate escalation, not the diagnosis. Give credit where it’s due.

Outcome: I&D performed. Oral antibiotics with close follow-up. No amputation.

Practical Tools

Scripts You Can Use Tomorrow

Opening Question (Every Patient): “Before we start, I’m curious—did you look this up online or ask any AI about it beforehand?”

Why this works: Non-judgmental. Expresses curiosity, not criticism. Opens the door for honest conversation.

When AI Was Right: “You know what? ChatGPT actually got this right. Here’s why it makes sense…”

Why this works: Builds trust by acknowledging accuracy. Positions you as fair arbiter, not defensive gatekeeper.

When AI Was Partially Right: “AI gave you a good starting point. Here’s what my exam adds to the picture…”

Why this works: Credits patient effort. Demonstrates value of physical examination. Collaborative rather than dismissive.

When AI Was Wrong: “This is exactly why my training took a decade. Here’s what AI missed…”

Why this works: Frames correction as expertise, not criticism of patient. Explains why AI was wrong, which educates for next time.

When AI Created Anxiety: “I notice something about this AI conversation—each question made it suggest worse things. That’s how these systems work. Let me tell you what I actually see…”

Why this works: Explains AI behavior. Separates algorithmic logic from clinical reality. Addresses the anxiety directly.

For Documentation:

Patient reports pre-visit AI consultation (ChatGPT/similar). AI suggested [diagnosis]. This was [confirmed/partially accurate/refuted] based on clinical examination showing [findings]. AI assessment integrated into clinical reasoning. Patient educated on [specific teaching point about AI limitations/strengths].

Why this works: Documents the AI use. Shows clinical integration. Demonstrates patient education. Protects you medicolegally.
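
If your EHR lets you build smart phrases or scripted note templates, that paragraph automates easily. Below is a minimal sketch in Python, purely to illustrate the fill-in-the-blanks structure; the helper name and fields are hypothetical, not any EHR’s actual API, so adapt it to whatever templating your system offers.

    # Minimal sketch: filling the documentation template above.
    # The function name and field list are illustrative only,
    # not part of any EHR's actual API.
    TEMPLATE = (
        "Patient reports pre-visit AI consultation ({tool}). "
        "AI suggested {diagnosis}. This was {accuracy} based on clinical "
        "examination showing {findings}. AI assessment integrated into "
        "clinical reasoning. Patient educated on {teaching_point}."
    )

    def ai_consult_note(tool, diagnosis, accuracy, findings, teaching_point):
        """Return a filled-in paragraph ready to paste into the chart."""
        return TEMPLATE.format(
            tool=tool,
            diagnosis=diagnosis,
            accuracy=accuracy,  # "confirmed", "partially accurate", or "refuted"
            findings=findings,
            teaching_point=teaching_point,
        )

    # Example, using Scenario 1 from this module:
    print(ai_consult_note(
        tool="ChatGPT",
        diagnosis="Hashimoto's thyroiditis",
        accuracy="refuted",
        findings="pale conjunctivae and resting tachycardia",
        teaching_point="AI cannot observe physical exam findings",
    ))

The point isn’t the code; it’s that a fixed template with five blanks takes seconds to complete and still documents AI use, clinical integration, and patient education every time.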

Implementation Guide

Week 1: Start Asking

Add the opening question to every encounter. Just ask. Don’t change anything else yet. Notice how many patients say yes.

Week 2: Start Engaging

When patients say yes, ask to see what AI told them. Actually look at it. Don’t perform looking at it; actually read it. This takes 30 seconds.

Week 3: Start Integrating

Begin using the scripts above. Start documenting AI integration in your notes. Notice how patient interactions change.

Week 4: Start Refining

By now you’ll have noticed patterns. Certain AI suggestions come up repeatedly. Certain patient types use AI more. Adjust your approach accordingly.

Common Pitfalls to Avoid

Performing Engagement: Patients can tell when you’re just pretending to take them seriously. Actually read their AI research.

Over-Correcting: Not every AI suggestion needs detailed refutation. Sometimes it’s right. Say so.

Blame-Shifting: “AI told you wrong” makes the patient feel stupid. “Here’s what AI couldn’t see” makes them feel informed.

Defensiveness: The patient asking about AI isn’t challenging your competence. They’re trying to be good patients.

Time Panic: This actually saves time once you get the rhythm. Trust the process.

Key Takeaways

  • AI is already in your exam room; the only question is whether patients keep telling you about it.
  • Ask every patient, without judgment, whether they looked things up online or asked an AI first.
  • Actually read what the AI said. Credit what it got right; frame corrections as what AI couldn’t see.
  • AI generates differential lists and triage advice. It can’t examine, observe, or ask follow-up questions.
  • Document AI use and how you integrated it. It educates the patient and protects you medicolegally.

Final Remarks

So here’s where we are. AI isn’t coming to your exam room. It’s already there, sitting quietly on your patient’s phone, waiting for you to acknowledge its existence or pretend it doesn’t matter.

I spent 25 years developing expertise that no chatbot can replicate. The ability to see a patient’s skin color change. To feel the texture of a concerning lesion. To hear the catch in someone’s voice when they’re describing chest pain they’re sure is nothing. That’s what evolution gave me: 10 billion sensors debugged over 3.8 billion years, refined by survival pressures that would have deleted any system that got threat detection wrong.

AI doesn’t have that. AI has pattern-matching at scale and the confidence of a sociopath. Useful, but not sufficient.

The physicians who thrive in this new reality won’t be the ones who fought the tide. They’ll be the ones who learned to navigate it. Who understood that AI entering the exam room wasn’t a threat to their expertise; it was a clarification of where that expertise actually matters.

You’re still the one with the license. You’re still the one with the malpractice insurance. You’re still the one the patient is trusting to keep them alive.

AI can’t do that. You can.

Let’s make sure you know how.