Module 2: The Opening Question That Changes Everything
How Six Words Can Turn a Defensive Patient Into a Collaborative Partner
Introduction
I’m going to give you the most valuable six words you’ll learn this year.
But first, let me tell you about the patient who taught them to me.
She was 34, presenting with abdominal pain. I walked in with my usual opener, “What brings you in today?” and she launched into a description that was… oddly specific. Right lower quadrant. Worse with movement. Started periumbilical and migrated. She even used the word “rebound.”
I was impressed. And suspicious.
“That’s a very precise description,” I said. “Have you had this before?”
She looked at the floor. “I… may have asked ChatGPT about it before coming in.”
And there it was. The sheepish admission. The averted eyes. The tone of a kid caught cheating on a test.
She was embarrassed. Embarrassed for doing exactly what any rational person would do when experiencing concerning symptoms in a healthcare system that told her the next available appointment was in three weeks.
Here’s what I could have done: gotten defensive, reminded her that I’m the one with the medical degree, launched into my differential without acknowledging her research.
Here’s what I actually did: “That’s completely normal. Most people do some research before coming in. What did ChatGPT tell you?”
Her whole body relaxed. She showed me her phone. ChatGPT had, in fact, suggested appendicitis as a possibility, which is why she’d insisted on being seen urgently. It had also listed six other differentials, explained McBurney’s point, and told her that rebound tenderness was a concerning sign.
She didn’t need me to diagnose appendicitis. She’d basically diagnosed herself. What she needed was confirmation, physical examination, and someone to get her to a surgeon.
I could have wasted fifteen minutes re-explaining everything she already knew. Instead, I said: “Let’s see if your research was right.”
It was. She was in the OR four hours later.
That encounter changed how I practice medicine. Because I realized that the question I asked, or didn’t ask, determined whether her AI research helped me or whether I had to work around it.
Six words: “Did you look this up beforehand?”
Everything flows from there.
2.1 The Magic Phrase (And Why Every Word Matters)
Here’s the phrase. Memorize it. Use it. Watch what happens.
“Before we start, I’m curious—did you look this up online or ask any AI about it?”
Let me break down why this specific phrasing works:
“Before we start” — Signals that their research is part of the process, not separate from it. You’re incorporating their pre-work into your clinical encounter.
“I’m curious” — Non-threatening. Expresses genuine interest rather than judgment. Curiosity is disarming.
“Did you look this up online” — Covers Google, WebMD, Reddit, all the classics. Doesn’t single out AI specifically, which might feel more loaded.
“Or ask any AI about it” — Normalizes AI as just another research tool. Not special. Not threatening. Just… a thing people do.
The tone matters as much as the words. This isn’t an interrogation. It’s not “Did you consult Dr. Google again?” with an eye roll. It’s genuine, open curiosity.
Try it with the wrong tone and you’ll get defensive patients who lie to you. Try it with the right tone and you’ll get patients who hand you their phone and walk you through their entire thought process.
One question. Two completely different clinical encounters.
2.2 What Happens When They Say Yes
Here’s the typical response you’ll get:
Patient, looking slightly embarrassed: “Yeah, I asked ChatGPT about it…”
And here’s where most physicians blow it. They hear the admission and immediately move on: “Okay, well, let me take a look.” Dismissive. Conversation closed. Information lost.
Instead, try this:
“That’s completely normal. Most people do some research before coming in. What did it tell you?”
Watch what this accomplishes:
Removes shame. They were expecting judgment. You gave them permission. The relief is palpable.
Shows you’re not threatened. A physician who’s secure in their expertise isn’t afraid of chatbot competition. You’re demonstrating confidence by engaging rather than dismissing.
Reveals their framework. Now you know what they’re thinking. What they’re afraid of. What differential they’ve already considered and rejected.
Provides diagnostic information. Their AI query tells you what symptom concerned them enough to research. Their interpretation of AI’s answer tells you their health literacy level. Their current emotional state tells you how much reassurance they’ll need.
This is free intelligence. Take it.
2.3 The Query Is the Diagnosis
Here’s something that took me years to understand: what patients type into ChatGPT is often more diagnostically valuable than what they say in the exam room.
Think about it. When a patient talks to you, they’re filtering. They’re trying to seem reasonable. They’re worried about looking like hypochondriacs. They’re editing their story based on what they think you want to hear.
But when they type a query into AI at 2 AM, terrified and alone? That’s the unfiltered version.
Let me give you some examples:
Query: “chest pain heart attack symptoms”
Translation: They’re afraid they’re dying. Cardiac anxiety is the driver. Even if the pain is obviously musculoskeletal, you’ll need to address the cardiac fear directly.

Query: “chest pain after eating”
Translation: They’ve connected it to meals. They’re thinking GI. They probably won’t need cardiac reassurance—they’re already on the GERD track.

Query: “chest pain anxiety attack vs heart attack”
Translation: They suspect it might be panic, but they’re not sure. They want permission to believe it’s not cardiac without feeling like they’re being dismissed.

Query: “chest pain costochondritis treatment”
Translation: They’ve already diagnosed themselves. They’re looking for management, not diagnosis. They want validation, not a workup.
Four patients with chest pain. Four completely different concerns. Same chief complaint, radically different conversations.
If you don’t ask about their AI use, you’re flying blind. You’re guessing which of these patients is in front of you. You’re spending time ruling out fears they’ve already dismissed while ignoring the fear that’s actually driving them.
The query is the diagnosis. Ask for it.
2.4 The Three Follow-Up Questions
Question 1: “What about that answer worried you?”
This gets at emotional reality. AI gives information; patients have feelings about that information. Sometimes the AI answer reassured them completely and they just want confirmation. Sometimes the AI answer terrified them and they need help processing it.
You can’t know which until you ask.
Example: Patient googled “persistent headache causes” and AI mentioned brain tumors. They might say “nothing, really, I just wanted to check.” But if you ask what worried them, you might get: “My aunt died of a brain tumor last year. When I saw that on the list…”
Now you understand the encounter. Now you can address what actually matters.
Question 2: “What didn’t AI address that you’re still wondering about?”
This identifies the gaps. AI might have given them a differential, but it couldn’t answer their specific question. Maybe they want to know if they can still exercise. Maybe they want to know if this could affect their pregnancy. Maybe they want to know if their insurance will cover treatment.
These are the questions they came to you for. The AI part is done; now they need a human.
Question 3: “What do you think is going on?”
This is the most important question, and most physicians never ask it.
Patients have theories. They might be wrong (they often are), but they have them. And if their theory conflicts with yours, you’ll have a compliance problem later if you never surfaced the conflict.
Example: You diagnose tension headache. You prescribe stretching and ibuprofen. Patient nods, takes the prescription, and never fills it because they’re convinced it’s a tumor and ibuprofen won’t help with cancer.
If you’d asked what they thought was going on, you could have addressed the cancer fear directly. Now you’ve got a patient who doesn’t trust your diagnosis and won’t follow your treatment.
Ask. It takes ten seconds. It saves hours.
2.5 The Validation-Examination Loop
Here’s a framework that works for almost every AI-informed encounter:
Step 1: Validate the effort. “Smart thinking to research this before coming in.”
Step 2: Acknowledge the AI assessment. “ChatGPT suggested [X]. That’s definitely a reasonable consideration.”
Step 3: Transition to examination. “Let me examine you and see if that fits what I find.”
Step 4: Provide your assessment in context. “Based on my exam, here’s what I think…”
This loop positions you as the validator, not the competitor. You’re not fighting AI; you’re grading its homework. That’s a completely different dynamic.
Notice what you’re not doing: dismissing their research, ignoring what AI said, or pretending the conversation started when they walked through your door.
What you are doing: integrating their pre-work into your clinical encounter. Making them feel like a collaborative partner, not a passive recipient. Demonstrating that your exam adds information AI couldn’t provide.
Everyone wins. Patient feels heard. You get better information. AI gets appropriate credit. Clinical encounter flows smoothly.
2.6 The Defensive Patient Problem
Sometimes you’ll ask the opening question and get this:
Patient, arms crossed: “Yeah, I looked it up. And before you tell me not to use the internet for medical advice…”
They’re bracing for judgment. They’ve been scolded before. They’re ready to fight.
Do not take the bait.
You: “Actually, I think it’s smart that you did your research. I’d rather you come in informed than confused. What did you find?”
Watch what happens. The defensiveness evaporates. They were expecting dismissal; you gave them respect. Now you have an ally instead of an adversary.
Here’s the thing about defensive patients: they’re not actually defending their AI use. They’re defending their right to be taken seriously. The AI isn’t the issue—the relationship is.
If a patient walks in expecting to be judged, you have a trust problem that existed before they ever opened ChatGPT. The AI use is just the surface manifestation. Address the relationship, and the AI integration takes care of itself.
2.7 The "I Didn't Look It Up" Response
About 30-40% of patients will say no, they didn’t look it up beforehand.
Some of them are telling the truth. Some of them are lying because they’re embarrassed or because a previous physician made them feel stupid for researching symptoms.
Either way, your response is the same:
You: “That’s fine either way. If you do look things up later, feel free to bring what you find to our next appointment. I’d rather help you understand good information than have you wondering alone.”
This accomplishes two things:
For truth-tellers: You’ve normalized future AI use. If they develop new symptoms later, they know you’re a safe person to bring their research to.
For fibbers: You’ve given them implicit permission. They might not admit to AI use today, but they’re more likely to next time.
You’re building a practice culture where patients feel safe disclosing their research. That takes time. Every encounter where you respond with curiosity instead of judgment moves the needle.
Clinical Scenario
Scenario 1: The Hidden Agenda
Presentation: 58-year-old man, presents with “just a routine checkup.” No specific complaints. Seems oddly tense for someone with nothing wrong.
Your Opening Question: “Before we go through the usual checkup items, I’m curious: did anything prompt this visit? Sometimes people look things up and want to get checked.”
Patient Response: Long pause. “I… may have asked ChatGPT about some symptoms I’ve been having.”
What AI Told Him: He’s been having intermittent chest pressure for a month. ChatGPT told him it could be angina, especially given his age and “sedentary lifestyle” (his description). It recommended he see a doctor for an EKG and possibly a stress test.
What AI Got Right:
- Appropriate concern for cardiac symptoms at his age
- Correct recommendation for evaluation
- Reasonable mention of risk factors
What AI Missed:
- His father died of MI at 60 (family history he never typed)
- He’s been avoiding this appointment for weeks out of fear
- The “routine checkup” framing was his way of minimizing because he’s terrified
Your Exam Findings: BP 148/92 (elevated). Mild obesity. No acute findings, but high-risk profile.
Integration Dialogue:
You: “I’m really glad you came in. ChatGPT gave you good advice; these symptoms do need to be checked. Can I ask why you framed this as a routine checkup?”
Patient: “I guess I didn’t want to say it out loud. My dad died of a heart attack at 60. I’m 58.”
You: “That’s exactly the kind of information AI couldn’t have known unless you told it. And it completely changes how I think about your risk. Here’s what we’re going to do…”
Teaching Moment: The opening question revealed that “routine checkup” was a cover story. Without it, you might have done standard screening and missed the urgent cardiac workup. The query unlocked the real visit.
Outcome: Stress test showed inducible ischemia. Cath revealed 80% LAD stenosis. Stented. Patient doing well.
Scenario 2: The Sophisticated Researcher
Presentation: 32-year-old software engineer, presents with two weeks of fatigue. Before you can ask your opening question, she hands you a printed document.
What She Hands You: A three-page ChatGPT conversation with follow-up questions, differential diagnosis, and a list of labs she’d like you to order: TSH, CBC, CMP, ferritin, vitamin D, B12.
What AI Got Right:
- Comprehensive differential for fatigue
- Appropriate initial lab panel
- Systematic approach to workup
What AI Missed:
- She’s a new mother (8-month-old at home)
- Sleeping 4-5 hours per night in fragments
- Returned to demanding job 2 months ago
- Breastfeeding (relevant for nutritional considerations)
Your Opening Response:
You: “This is impressively thorough. You clearly put real thought into this. I’m going to use this as a starting point. But let me ask some things that AI couldn’t have known to ask…”
Patient: “Like what?”
You: “How old is your baby, and how’s the sleep situation?”
Patient: Bursts into tears.
Integration Dialogue:
You: “Here’s what I think is happening. Your lab list is smart, and we’ll check those things. But I’d bet a significant amount that when we get those results back, they’ll be mostly normal; maybe a little low on iron and D, which we’d expect with breastfeeding. The fatigue isn’t a mystery; it’s math. You’re running on 4 hours of sleep while working full-time and feeding another human being with your body.”
Patient: “But I should be able to handle this. Other people do.”
You: “ChatGPT gave you a medical differential. Let me give you a human one: you’re exhausted because your situation is exhausting. That’s not weakness; that’s reality. We’ll rule out the medical stuff, but we also need to talk about support systems, because no amount of B12 is going to fix sleep deprivation.”
Teaching Moment: Sophisticated AI users can develop comprehensive differentials that miss obvious contextual factors. Pivoting off the opening question (acknowledging her research, then asking what AI couldn’t know) revealed the actual diagnosis.
Outcome: Labs unremarkable except mild iron deficiency (expected with lactation). Supplemented iron. More importantly: connected with lactation consultant, adjusted work expectations, spouse took over night feeds twice weekly. Fatigue resolved in six weeks.
Scenario 3: The AI Escalation Spiral
Presentation: 24-year-old woman, presents with “tingling in my hands.” Visibly anxious. Has been to three other physicians in the past month, all of whom told her nothing was wrong.
Your Opening Question: “I know you’ve been seen for this a few times. Before we start fresh, I’m curious: have you been researching this online between appointments?”
Patient Response: “I’ve been asking ChatGPT for two months. Every time a doctor says I’m fine, I ask ChatGPT why they might be wrong, and it gives me more things to worry about.”
What AI Told Her: Initial query suggested carpal tunnel or ulnar neuropathy. When she reported normal EMG, AI suggested “small fiber neuropathy, which doesn’t show on standard testing.” When she reported normal skin biopsy, AI suggested “autoimmune conditions that can be seronegative.” Each reassurance from physicians prompted a new query about what they might have missed, and AI always had an answer.
What AI Got Right:
- Each individual answer was technically accurate
- Rare conditions do sometimes present atypically
What AI Missed:
- The pattern of escalating anxiety
- The iterative nature of health anxiety seeking
- That “rare conditions” become increasingly unlikely after comprehensive negative workups
Your Exam Findings: Normal neurological exam. Classic hyperventilation pattern. Hands are cold and clammy. Symptoms reproducible with sustained hyperventilation.
Integration Dialogue:
You: “Can I show you something about your ChatGPT conversations? [She shows phone.] Look at the pattern. First query: tingling. Answer: probably carpal tunnel. Normal test. Second query: what else could cause tingling? Answer: small fiber neuropathy. Normal test. Third query: what if those tests miss something? Answer: seronegative autoimmune. Do you see what’s happening?”
Patient: “It keeps finding things…”
You: “Because you keep asking it to. That’s not a flaw in the AI; it’s doing exactly what you’re asking. ‘What rare thing could this be?’ always has an answer. There’s always something rarer. But here’s what AI can’t tell you: at some point, after enough normal tests, the answer isn’t a rarer disease. The answer is that you don’t have a disease. What you have is anxiety about having a disease; the anxiety causes real symptoms (the tingling), and the symptoms cause more anxiety. It’s a loop.”
Patient: “So the tingling isn’t real?”
You: “The tingling is completely real. It’s just not coming from nerve damage. It’s coming from breathing fast and shallow because you’re scared. Watch: breathe with me for 60 seconds, and let’s see what happens to the tingling.”
Teaching Moment: AI becomes harmful when used to feed health anxiety spirals. Identifying the pattern of escalating queries is diagnostic and therapeutic when you explain the loop to the patient.
Outcome: Patient accepted referral to health anxiety specialist. Still symptomatic at one month but no longer doctor-shopping. At six months, tingling resolved with anxiety treatment.
Practical Tools
The Opening Question Variations
Choose based on your style and patient population:
Standard Version: “Before we start, I’m curious—did you look this up online or ask any AI about it?”
Warmer Version: “A lot of people research their symptoms before coming in—I would too. Did you happen to look anything up?”
Direct Version: “Did you Google this or ask ChatGPT before coming in? I’d like to know what you found.”
For Established Patients: “Did your friend Dr. Google have any opinions about this?”
For Tech-Savvy Patients: “Any AI-assisted differential diagnosis before we start?”
Response Templates
When They Admit AI Use (Sheepish): “That’s completely normal. Most people do research before coming in. What did it tell you?”
When They Admit AI Use (Defensive): “Good. I’d rather you come in informed than confused. Walk me through what you found.”
When They Deny AI Use: “That’s fine either way. If you do look things up later, bring what you find next time. I’d rather help you make sense of it.”
When They Hand You a Printout: “This is thorough. Let me read through it and then add what my exam shows.”
Follow-Up Question Templates
For Emotional Context: “What about that answer worried you most?”
For Identifying Gaps: “What didn’t the AI address that you’re still wondering about?”
For Patient Theories: “Based on everything you’ve read and felt, what do you think is going on?”
For Clarifying AI Source: “Was this ChatGPT, or a different AI? Some are more medical-focused than others.”
Documentation Template
AI Consultation Documented: Patient reports pre-visit AI research (ChatGPT) regarding [chief complaint].
AI Suggestion: [What AI told patient]
Patient Concern: [What worried them about AI response]
Clinical Correlation: AI suggestion was [confirmed/partially supported/not supported] by clinical examination showing [findings].
Integration: Discussed AI assessment with patient. Educated on [specific point about AI accuracy/limitations].
Plan: [Your actual clinical plan]
Implementation Guide
Making It Automatic
The opening question only works if you ask it consistently. Here’s how to make it habitual:
Week 1: Write the question on a sticky note. Put it on your laptop. Ask it for every patient.
Week 2: Notice which patient types say yes most often. Adjust your approach for each demographic.
Week 3: Train your MA/nurse to ask during rooming. They can document it in the chart before you enter.
Week 4: By now it’s habit. You’ll feel strange starting an encounter without asking.
Time Management
The fear is that AI discussion adds time. The reality is usually the opposite.
Time lost to skipping the question:
- Explaining differentials patient already knows: 5 minutes
- Addressing hidden cardiac fear they never mentioned: 8 minutes
- Managing defensiveness when you dismiss their research: 10 minutes
- Repeating information they already read: 5 minutes
Time spent asking the question:
- Opening question: 10 seconds
- Reviewing their AI summary: 1-2 minutes
- Starting from where they already are: saves 5-10 minutes
Net time saved: roughly 5-15 minutes per encounter, depending on which of the pitfalls above you avoid.
Common Pitfalls
Asking with judgment in your voice: The question is only as good as your tone. Practice saying it neutrally.
Not actually reading what they show you: Patients can tell when you’re faking engagement. Take 30 seconds to actually read it.
Moving past it too quickly: The opening question is a door. Walk through it. Ask follow-ups.
Forgetting to document: AI integration is new territory for malpractice risk. Document what the patient told you and how you addressed it.
Key Takeaways
- One question changes everything. "Did you look this up beforehand?" transforms encounters from adversarial to collaborative.
- The patient's query reveals their real fear. What they typed into ChatGPT at 2 AM is more honest than what they tell you face-to-face.
- Tone matters more than words. The same question asked with curiosity versus judgment produces completely different responses.
- Three follow-ups unlock the encounter: What worried you? What didn't AI address? What do you think is going on?
- This saves time, not costs it. Starting from what they already know is faster than starting from zero.
- You're building a practice culture. Every non-judgmental response makes patients more likely to disclose next time.
Final Remarks
So here’s where we are. AI isn’t coming to your exam room. It’s already there, sitting quietly on your patient’s phone, waiting for you to acknowledge its existence or pretend it doesn’t matter.
I spent 25 years developing expertise that no chatbot can replicate. The ability to see a patient’s skin color change. To feel the texture of a concerning lesion. To hear the catch in someone’s voice when they’re describing chest pain they’re sure is nothing. That’s what evolution gave me: 10 billion sensors debugged over 3.8 billion years, refined by survival pressures that would have deleted any system that got threat detection wrong.
AI doesn’t have that. AI has pattern-matching at scale and the confidence of a sociopath. Useful, but not sufficient.
The physicians who thrive in this new reality won’t be the ones who fought the tide. They’ll be the ones who learned to navigate it. Who understood that AI entering the exam room wasn’t a threat to their expertise; it was a clarification of where that expertise actually matters.
You’re still the one with the license. You’re still the one with the malpractice insurance. You’re still the one the patient is trusting to keep them alive.
AI can’t do that. You can.
Let’s make sure you know how.
