Module 10: Implementation & Documentation (Making It Stick)

How to Actually Do This Tomorrow Without Your Workflow Collapsing

Module 10 of 10

Introduction

Here’s what happens to most CME courses: you learn something valuable, you nod along, you think “I should do this,” and then Monday morning arrives and your schedule has 28 patients and your EHR is doing that thing where it logs you out every seven minutes and the whole thoughtful framework you learned evaporates into the reality of practice.

I’ve taken those courses. I’ve forgotten those courses. I’ve been the physician who meant to implement something and never did.

So let me be direct about what this module is: it’s the part where we take everything from Modules 1-9 and make it actually happen in your practice. Not theoretically. Not eventually. Tomorrow.

Because here’s the truth about AI integration: it’s not one big decision. It’s a hundred small habits. Asking the opening question every time. Narrating your exam findings. Documenting AI involvement appropriately. Having ready responses for common scenarios. Creating systems that make the right thing the easy thing.

If you try to remember all of this through willpower, you’ll fail. I would fail. Anyone would fail. The cognitive load of practice is already overwhelming; you can’t add “remember to integrate AI discussions thoughtfully” to your mental checklist and expect it to stick.

What works is workflow integration—building AI discussion into your existing processes so thoroughly that it becomes automatic. Making documentation templates so you’re not reinventing language every time. Training staff so you’re not the only one asking the questions. Creating patient education materials so you’re not explaining the same concepts fifty times a week.

That’s what this module delivers: the operational infrastructure that turns “good idea” into “how we practice.”

10.1 The Workflow Integration Framework

The goal is simple: make AI discussion so embedded in your process that you can’t forget it.

Pre-Visit (Intake)

Your medical assistant asks during rooming: “Did you research your symptoms online or with AI before coming in today?”

This accomplishes several things:

  • Normalizes the question (everyone gets asked)
  • Gets the information before you enter the room
  • Documents AI involvement in the intake
  • Signals to patients that you’re open to discussing their research

The MA documents the response in a consistent location—chief complaint, HPI, or a dedicated field if your EHR supports it.

Visit Start (Your Opening)

Even if the MA already asked, you confirm: “I see you did some research before coming in. What did you find?”

Or if they said no to the MA: “Totally fine. If you do look things up later, feel free to bring what you find to our next appointment.”

This takes 10-15 seconds. It becomes automatic within a week.

Mid-Visit (Integration)

Based on what AI told them:

  • If AI was right: “ChatGPT got this right. Here’s what my exam confirms…”
  • If AI was partially right: “AI was on the right track. Let me add what I’m finding…”
  • If AI was wrong: “AI missed something important. Here’s what I’m detecting…”

Narrate your physical findings explicitly. Make your sensing visible.

Visit End (Education)

Provide the teaching moment appropriate to this encounter: when to override AI, what AI can’t detect, how to use AI better next time, when to call regardless of what AI says.

Post-Visit (Documentation)

Use templated language to document AI involvement consistently.

10.2 Staff Training Essentials

Your workflow is only as good as your team’s understanding of it.

For Medical Assistants/Nurses:

Train them on:

  • Why we ask about AI use (it’s information, not judgment)
  • How to document the response
  • What to flag for your attention (e.g., patient stopped medication based on AI)
  • How to respond to patients who seem defensive

Script for MAs:

“Before the doctor comes in, I have a quick question: Did you look up your symptoms online or ask an AI like ChatGPT about them? A lot of people do, and the doctor likes to know what you found so they can build on it.”

This framing is important—“so they can build on it” signals that AI research is respected, not dismissed.

For Front Desk Staff:

They should know:

  • AI use is normal and expected
  • Patients shouldn’t feel embarrassed about it
  • The practice takes a collaborative approach

They don’t need to understand the clinical details, just the culture: we’re AI-friendly in this practice.

10.3 Documentation Templates

Documentation serves two purposes: clinical communication and medicolegal protection. Your AI documentation should accomplish both.

Template 1: AI-Informed, Assessment Confirmed

Patient reports pre-visit AI consultation (ChatGPT) regarding [chief complaint]. AI suggested [diagnosis/recommendation]. Physical examination confirms this assessment with findings including [specific physical findings not available to AI—e.g., reproducible chest wall tenderness, normal cardiac exam, clear lungs]. Patient educated that AI assessment was accurate in this case; discussed the value of physical examination for confirmation. Patient understands appropriate use of AI for medical questions.

Template 2: AI-Informed, Assessment Partially Confirmed

Patient reports AI consultation suggesting [AI assessment]. Clinical evaluation partially supports this assessment; however, additional findings of [specific findings] indicate [refined/modified diagnosis]. AI assessment was reasonable but incomplete due to [specific limitation—inability to examine, missing clinical context, etc.]. Patient educated on AI limitations regarding [specific limitation]. Plan reflects integrated clinical assessment.

Template 3: AI-Informed, Assessment Corrected

Patient reports AI consultation suggesting [AI diagnosis]. Physical examination reveals [findings inconsistent with AI assessment], specifically [detailed findings]. AI assessment did not account for [physical findings/contextual factors]. Correct diagnosis: [your diagnosis]. Patient educated on limitation of AI for [specific type of assessment]. Discussed importance of in-person evaluation when [specific scenario]. Patient verbalized understanding.

Template 4: AI-Influenced Delay in Care

Patient reports [X]-day delay in presentation based on AI (ChatGPT) reassurance that symptoms were likely [AI’s assessment]. On examination, findings consistent with [actual diagnosis] including [specific findings]. AI assessment was inadequate due to [specific limitation—inability to assess vital signs, physical findings, symptom severity]. Patient counseled on AI limitations for [specific scenario] and appropriate thresholds for seeking in-person evaluation regardless of AI guidance.

10.4 Smart Phrases and Shortcuts

If your EHR supports smart phrases or text expansion, create shortcuts:

.AIOK → “Patient consulted AI pre-visit. AI assessment consistent with clinical findings. Education provided regarding appropriate AI use.”

.AICORRECT → “Patient consulted AI pre-visit. AI assessment of [***] corrected based on physical examination revealing [***]. Patient educated on AI limitations.”

.AIDELAY → “Patient delayed care based on AI reassurance. Discussed importance of in-person evaluation for [***] symptoms regardless of AI guidance.”

.AINONE → “AI use discussed. Patient denies pre-visit AI consultation.”

These shortcuts reduce documentation burden from minutes to seconds.
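If your EHR doesn't support smart phrases, the same idea works with any text expander, or even a few lines of your own. Here's a minimal sketch of the expansion logic, assuming you keep shortcuts in a plain dictionary (the names `SMART_PHRASES` and `expand` are illustrative, not from any particular EHR or tool):

```python
# Minimal smart-phrase expander sketch. Shortcut names mirror the
# examples above; the function simply swaps tokens for full text.

SMART_PHRASES = {
    ".AIOK": ("Patient consulted AI pre-visit. AI assessment consistent "
              "with clinical findings. Education provided regarding "
              "appropriate AI use."),
    ".AINONE": "AI use discussed. Patient denies pre-visit AI consultation.",
}

def expand(note: str, phrases: dict[str, str] = SMART_PHRASES) -> str:
    """Replace any shortcut token in the note with its full text.

    Wildcard markers like [***] are deliberately left in place for you
    to fill in, mirroring how EHR smart phrases behave.
    """
    for shortcut, text in phrases.items():
        note = note.replace(shortcut, text)
    return note

print(expand("Assessment: .AIOK"))
```

The design point is the same one the templates make: the full, medicolegally careful language lives in one place, and each note pulls it in with a few keystrokes.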

10.5 Patient Education Materials

You shouldn’t have to explain the same concepts verbally to every patient. Create handouts that do the heavy lifting.

Handout 1: “What Your Doctor Can Detect That AI Cannot”

  • Your 10 billion sensors vs. AI’s zero
  • Physical findings AI can’t assess (skin color, breath sounds, tenderness, reflexes)
  • Why “probably fine” from AI isn’t the same as “confirmed fine” from examination
  • Examples of conditions where exam changes everything

Handout 2: “When to Come In (Even If AI Says Wait)”

  • The velociraptor test: when your body says something’s wrong
  • Red flags that override any AI reassurance
  • “Getting worse instead of better” as a universal rule
  • Parental instinct with children

Handout 3: “How to Use AI Wisely for Health Questions”

  • AI for education, physician for diagnosis
  • Questions to evaluate AI quality (Does it cite sources? Does it express uncertainty?)
  • When AI is helpful vs. when it’s dangerous
  • The “one question” rule for health anxiety

Handout 4: “Questions to Ask Your AI”

  • “What sources is this based on?”
  • “What might you be missing?”
  • “When should I see a doctor instead of following this advice?”
  • “What’s the worst-case scenario for these symptoms?”

These handouts serve as reinforcement of your verbal teaching. Give them to patients who seem engaged; don’t force them on everyone.

10.6 Measuring What Matters

You can’t improve what you don’t measure. Here’s what to track:

Process Metrics:

  • AI consultation rate: What percentage of patients consulted AI before their visit? Track monthly. This tells you how prevalent AI use is in your population.
  • Documentation compliance: What percentage of visits include AI documentation when AI was discussed? Audit this quarterly. It tells you whether your workflow is actually happening.
  • AI source tracking: Which AI are patients using? ChatGPT? Google? Specific symptom checkers? This helps you understand what you’re dealing with.

Outcome Metrics:

  • AI accuracy rate: When patients consulted AI, how often was the AI assessment accurate, partially accurate, or wrong? This gives you calibration for how much to trust patient AI research.
  • AI-influenced delays: How often did AI cause patients to delay appropriate care? This is your harm metric.
  • AI-influenced medication issues: How often did AI lead to inappropriate medication changes? Another harm metric.

Satisfaction Metrics:

  • Patient feedback: Do patients feel their AI research was respected? Add a question to your satisfaction surveys.
  • Time impact: Are AI-integration conversations adding time or saving it? Track subjectively for the first month.

Don’t over-engineer this. A simple tally of AI-confirmed vs. AI-corrected vs. AI-delayed cases gives you most of what you need.
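That tally really can be this simple. As a sketch, assuming you log one outcome label per AI-consulting patient each month (the labels and sample data below are illustrative):

```python
from collections import Counter

# One label per AI-consulting patient this month: "confirmed" (AI was
# right), "corrected" (exam changed the answer), "delayed" (AI
# reassurance delayed care). Sample data for illustration only.
outcomes = ["confirmed", "confirmed", "corrected", "confirmed",
            "delayed", "corrected", "confirmed"]

counts = Counter(outcomes)
total = sum(counts.values())

for label in ("confirmed", "corrected", "delayed"):
    share = counts[label] / total * 100
    print(f"AI-{label}: {counts[label]} ({share:.0f}%)")
```

A month of tick marks run through something like this gives you your local AI accuracy rate and your harm metric without any elaborate tracking system.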

Clinical Scenario: Putting It All Together

Let’s walk through a complete encounter using everything from this curriculum.

Mrs. Patterson, 56, presents for evaluation of persistent fatigue. Your MA has documented: “Patient reports researching symptoms with ChatGPT before visit.”

Your Opening:

You: “I see you did some research before coming in. What did ChatGPT tell you?”

Patient: “It said my fatigue could be thyroid, anemia, depression, or sleep apnea. It recommended getting bloodwork and maybe a sleep study.”

Your Assessment of AI:

AI gave her a reasonable differential. Those are all appropriate considerations for fatigue in a 56-year-old woman. This is AI working well—education and hypothesis generation.

Your Response:

You: “That’s actually a solid differential. Those are exactly the things I’d be considering. Let me examine you and see if I can narrow down which ones are most likely for you specifically.”

Your Examination:

You notice she looks pale. Conjunctivae are pale. Heart rate is 96, slightly elevated. She has a soft systolic murmur you don’t recall from previous visits. Her thyroid is normal. She mentions her periods have been heavier lately.

Your Integration:

You: “ChatGPT gave you good possibilities. Here’s what my exam adds: you look pale to me—your inner eyelids are lighter than they should be. Your heart rate is slightly fast, and I’m hearing a soft murmur that wasn’t there before. Those are signs of anemia that AI couldn’t detect remotely. Combined with your heavier periods, I’m betting on iron deficiency anemia as the main cause.”

Patient: “So it’s not my thyroid?”

You: “We’ll check thyroid too—it’s on the list and cheap to test. But the physical findings point toward anemia first. AI generated the right hypotheses; my exam tells us which one is most likely for you. That’s the combination working well.”

Your Teaching Moment:

You: “You used AI exactly right here. You researched, got informed, generated questions, and came in for evaluation. AI couldn’t see your pale conjunctivae or hear your heart, but it gave you a good framework. Keep doing this—AI for preparation, me for confirmation.”

Your Documentation:

Patient reports pre-visit AI consultation (ChatGPT) regarding fatigue. AI appropriately generated differential including thyroid dysfunction, anemia, depression, and sleep apnea. Physical examination reveals pallor, pale conjunctivae, resting tachycardia (96), and new soft systolic flow murmur—findings consistent with anemia not detectable by AI. Patient reports menorrhagia. Clinical impression: most likely iron deficiency anemia secondary to menstrual blood loss. Labs ordered: CBC, iron studies, ferritin, TSH, CMP. Will treat based on results. Patient educated that AI differential was appropriate; physical examination narrowed diagnostic probability. Discussed appropriate integration of AI research with clinical evaluation.

Total additional time: Maybe 90 seconds. Workflow followed. Documentation complete. Patient educated. Trust built.

Implementation Timeline

Week 1: Foundation

  • Add AI question to MA intake script
  • Create documentation shortcuts in EHR
  • Practice the opening question until automatic
  • Tell your staff what you’re doing and why

Week 2: Consistency

  • Ask every patient (no exceptions this week)
  • Use documentation templates for every AI encounter
  • Notice what’s working and what’s awkward
  • Adjust language to fit your natural style

Week 3: Refinement

  • Start narrating exam findings explicitly
  • Add teaching moments for appropriate patients
  • Begin tracking basic metrics (even just tick marks)
  • Create or adapt patient handouts

Week 4: Expansion

  • Train any staff you missed
  • Review first month’s patterns
  • Identify common AI errors in your patient population
  • Adjust workflows based on what you’ve learned

Months 2-3: Optimization

  • Refine templates based on actual use
  • Develop specialty-specific scenarios
  • Share workflows with colleagues
  • Establish regular metric review

Month 4+: Maintenance

  • Quarterly audit of documentation compliance
  • Annual review of AI landscape and workflow updates
  • Continuous improvement based on emerging patterns

Common Implementation Pitfalls

Pitfall 1: Asking the question but not doing anything with the answer

Patient tells you what AI said. You say “okay” and move on. You’ve gained nothing and documented nothing.

Fix: Commit to one response for every AI disclosure—acknowledge, integrate, or correct.

Pitfall 2: Inconsistent documentation

Some visits documented, some not. No pattern. Medicolegally vulnerable and clinically useless.

Fix: Use smart phrases. Make documentation easier than not documenting.

Pitfall 3: Forgetting to train staff

You’re asking about AI. Your MA isn’t. Mixed signals to patients.

Fix: 15-minute staff training session. Provide scripts. Follow up in a week.

Pitfall 4: Over-engineering metrics

Elaborate tracking system that no one maintains. Data collection becomes burden.

Fix: Start with three simple metrics. Add complexity only if needed.

Pitfall 5: Trying to change everything at once

New question, new documentation, new education, new workflow—all on Monday. Overwhelm. Failure.

Fix: Staged implementation. Master the opening question before adding documentation templates.

Key Takeaways

  • Embed the AI question into intake, your opening, exam narration, and documentation so it happens automatically, not through willpower
  • Train your MAs and front desk with scripts; your workflow is only as good as your team's understanding of it
  • Use templates and smart phrases to make consistent documentation easier than not documenting
  • Track a few simple metrics—AI-confirmed vs. AI-corrected vs. AI-delayed—and resist over-engineering
  • Implement in stages: master the opening question before layering on templates, handouts, and metrics

Final Remarks

So here we are. Ten modules. From “Why Fighting This Is Pointless” to “Implementation & Documentation.” From understanding the reality of AI in your waiting room to building the workflows that make integration automatic.

If you’ve read all of this—really read it, not skimmed it—you now have something most physicians don’t: a framework for practicing medicine in the AI age.

You understand the liability asymmetry (you pay malpractice insurance; OpenAI doesn’t). You can articulate the sensing gap (10 billion neurons versus zero). You know when to validate AI accuracy and when to correct AI errors. You can explain the velociraptor test and Intelligent Humility in language patients understand. You have scripts for every scenario, templates for every documentation need, and a workflow that makes this sustainable.

But frameworks don’t matter if you don’t use them.

So here’s my challenge: tomorrow, ask every patient whether they consulted AI before coming in. Not some patients. Every patient. For one week.

See what happens. Notice what they tell you. Feel how the conversations change when AI research becomes visible instead of hidden. Watch how patients respond when you validate their effort rather than dismiss their research.

Then document it. Use the templates. Make it routine.

Within a month, this will be how you practice. Not something you’re trying. Not something you’re implementing. Just… how you practice.

That’s the goal. Not to make AI integration a special effort, but to make it invisible—baked so deeply into your workflow that you can’t imagine practicing any other way.

The patients are already using AI. They’re going to keep using AI. The only question is whether you’re going to work with that reality or against it.

I know which one I’m choosing.

I hope you’ll join me.