About This Project

Teaching patients and physicians to navigate medical AI responsibly.

AI in the Exam Room is a free, non-commercial educational platform addressing a gap nobody else is filling: how do patients use AI safely, and how do physicians integrate AI-informed patients into their practice?

The Problem

For Patients

Millions of people now consult AI before, or instead of, seeing a doctor. Sometimes AI helps. Sometimes it delays critical care. Sometimes it causes real harm.

Nobody taught patients how to use these tools safely:

  • When to trust AI vs. when to be skeptical
  • What AI literally cannot detect remotely
  • How to spot hallucination (confident fabrication)
  • When to stop typing and call 911
  • How to prepare for appointments using AI appropriately

For Physicians

Patients arrive with AI printouts, ChatGPT screenshots, and questions generated by algorithms. Most physicians received zero training on handling this.

Nobody taught physicians how to work with AI-informed patients:

  • How to open the conversation without judgment
  • Scripts for validating correct AI information
  • Scripts for correcting dangerous AI misinformation
  • Documentation that protects against liability
  • Practice workflows that make AI discussion routine

The Core Framework

The Velociraptor Test

Your body has been debugged by 3.8 billion years of evolution. AI has been trained on text for a few years. When your gut says something's wrong, trust the hardware that kept your ancestors alive.

10 Billion Sensors

Humans have approximately 10 billion sensory neurons constantly sampling the environment. AI has zero. It can't see you, smell ketoacidosis, feel your pulse, or detect that you "look sick."

Content-Controlled Intelligence

The difference between helpful AI and dangerous AI isn't the model—it's the knowledge base. AI trained on curated medical literature behaves very differently than AI trained on the entire internet.

Intelligent Humility

The most important output an AI can produce is "I don't know." Systems that acknowledge their limitations are far more trustworthy than those that confidently fabricate answers.

What This Is Not

Not Anti-Technology

This isn't about rejecting AI. It's about using it appropriately. AI can genuinely help patients prepare better questions and understand their conditions. The goal is integration, not elimination.

Not a Commercial Product

No subscriptions. No upsells. No data harvesting. No sponsored content. This is educational material built to fill a gap, not to generate revenue.

Not Medical Advice

This curriculum teaches about AI use; it doesn't replace medical care. The consistent message: use AI for information and preparation, then see your actual doctor for diagnosis and treatment.

Why Is This Free?

Because the problem is urgent and the solution shouldn’t have a paywall.

People are getting hurt by AI misinformation right now. Patients are delaying care because ChatGPT told them not to worry. Physicians are struggling without any framework for handling AI-informed patients.

Making this curriculum free means:

  • Patients can learn safe AI use regardless of income
  • Physicians can implement it immediately without budget approval
  • Medical schools can integrate it into curricula without licensing
  • The information spreads faster than the misinformation

The goal is impact, not income.

Who Built This

John C. Ferguson, MD, FACS

Quintuple board-certified facial plastic surgeon. President-Elect of the American Board of Facial Cosmetic Surgery. Co-Editor-in-Chief at StatPearls Publishing. Founder of EdAI Systems.

I built this because I have a foot on each shore. I operate on patients by day and build AI systems the rest of the time. I've seen AI help patients prepare better questions. I've also seen AI delay care that could have saved lives.

Someone needed to build the bridge. I had the perspective to do it.

Read Full Bio →

Get Involved

Share the Curriculum

Know patients who use AI for health questions? Know physicians struggling with AI-informed patients? Share this resource. The more people learn safe AI use, the fewer get hurt.

Provide Feedback

Found an error? Have a suggestion? See something that could be clearer? Your feedback improves the curriculum for everyone.

Stay Updated

New modules, resources, and CME accreditation announcements. No spam—just updates that matter.


AI is powerful. Medicine is consequential.

The intersection requires care, humility, and clear thinking about what these tools can and cannot do.
This curriculum exists because both patients and physicians deserve better guidance than “AI bad” or “AI will solve everything.”

The truth is more interesting. And more useful.