Module 8: When AI Is Actually Helpful

Appropriate Use Cases

Module 8 of 8

The Tale of Two Patients

Let me tell you about two patients. Same symptoms. Same AI system. Radically different outcomes.

Patient A: Sarah

Sarah, 45, wakes up with chest discomfort. Pressure-like sensation. Left-sided. She’s worried but doesn’t want to overreact.

She asks AI: “I have chest pressure. What could this be?”

AI responds: “Chest pressure can result from anxiety, GERD, costochondritis, or cardiac issues. Given your age and absence of mentioned risk factors, non-cardiac causes are statistically more likely. Try relaxation techniques and antacids. If symptoms persist, consult your physician.”

Sarah thinks: “Probably anxiety. I’ve been stressed at work. I’ll try breathing exercises.”

She doesn’t go to the ER. She goes back to bed.

Three hours later, she’s in an ambulance. Massive heart attack (MI). Significant cardiac damage because of the delay. She survived, but with permanently reduced cardiac function.

Patient B: Jennifer

Jennifer, 45, wakes up with chest discomfort. Pressure-like sensation. Left-sided. She’s worried but doesn’t want to overreact.

She asks AI: “I have chest pressure and I’m worried. I want to go to the ER but I want to be prepared. What questions should I ask the ER doctor? What should I tell them?”

AI responds with the key information to provide and questions to ask, then notes: “Chest pressure at your age requires immediate evaluation regardless of presumed cause. Go to the ER now. Don’t drive yourself. This information is for preparation, not a reason to delay care.”

Jennifer goes to the ER. Arrives prepared. Clearly describes symptoms. Mentions family history. Asks informed questions.

ER physician takes her seriously. EKG shows changes. Troponin elevated. Cardiac cath shows blocked artery. Stent placed. Minimal cardiac damage because of early treatment.

The Critical Distinction

Same symptoms. Same age. Same AI system. Different questions. Different use of AI. Different outcomes.

Sarah used AI to decide whether to seek care. AI reassurance delayed treatment. Bad outcome.

Jennifer used AI to prepare for the care she was already planning to seek. AI helped her communicate effectively. Good outcome.

AI isn’t inherently good or bad for healthcare. It’s a tool. And like any tool, it can be used appropriately (helpful) or inappropriately (dangerous).

This module is about recognizing the difference. Understanding what AI does well, what it absolutely should not do, and how to use it as a preparation and education tool rather than a diagnostic and treatment tool.

AI is for information and preparation, not diagnosis and treatment.

8.1 What AI Does Well

APPROPRIATE AI USE:

✅ Understanding Diagnosed Conditions
Your doctor diagnosed you with a condition. You want to understand it better. Use AI to learn about: What causes this condition, typical course and prognosis, lifestyle changes that help, what to expect.

Key Point: AI for education about diagnosed conditions is useful. AI for diagnosing conditions is dangerous.

✅ Medication Information
Your doctor prescribed medication. You want to know about it. Use AI to learn: Common side effects, drug interactions, how to take it, what to watch for.

Key Point: AI for medication information is helpful. AI for deciding whether to take medication or changing doses is dangerous.

✅ Preparing Questions for Doctor Visits
You have symptoms and an upcoming doctor appointment. Use AI to: Generate informed questions, learn what information to bring, understand what your doctor might examine or test, research possible causes to discuss.

Key Point: AI for preparation is excellent. AI for avoiding the doctor is dangerous.

✅ General Health Education
You want to learn about health topics. Use AI for: Understanding how body systems work, learning about medical conditions generally, health literacy education.

Key Point: AI for general health education is valuable. AI for personalized medical decisions is dangerous.

✅ Interpreting Lab Results You Already Have
Your doctor ordered labs. You have results but don’t understand them. Use AI to: Learn what tests measure, understand normal ranges, prepare questions for your doctor about results.

Key Point: AI for understanding your test results is helpful. AI for deciding whether you need tests is dangerous.

✅ Post-Visit Clarification
You saw your doctor. You’re home and realize you didn’t fully understand something. Use AI to: Clarify medical terminology, understand treatment instructions, learn about the lifestyle changes your doctor recommended.

Key Point: AI for post-visit clarification is very useful. AI for deciding whether to follow your doctor’s recommendations is dangerous.

INAPPROPRIATE AI USE:

❌ Diagnosing Symptoms
Asking AI “What do I have?” or “Is this cancer?” AI cannot examine you, cannot run tests, cannot distinguish between similar conditions. Diagnosis requires clinical evaluation.

❌ Deciding Whether to Seek Care
Asking AI “Do I need to see a doctor?” or “Is this serious enough for ER?” AI cannot assess severity remotely, cannot detect red flags through examination, cannot make triage decisions.

❌ Recommending Treatments
Asking AI “What should I take for this?” or “How do I treat this?” AI cannot confirm diagnosis, assess contraindications, or consider your complete medical history.

❌ Modifying Medical Treatment
Asking AI “Can I stop this medication?” or “Can I change my dose?” Medication decisions require medical supervision. Changes can be dangerous.

❌ Ignoring Red Flags
AI says “probably nothing,” so you don’t seek care for concerning symptoms. AI’s reassurance is based on statistical likelihood, not on an examination of you.

❌ Replacing Medical Care
Using AI instead of annual physicals, follow-up appointments, or appropriate monitoring. Chronic disease management requires physician oversight.

THE DECISION RULE:

Use AI for: Education about diagnosed conditions • Understanding prescribed medications • Preparing for medical appointments • Learning general health information • Interpreting your actual test results • Clarifying instructions from your doctor

Don’t use AI for: Diagnosing your symptoms • Deciding whether to seek care • Determining treatment • Modifying prescribed medications • Replacing medical evaluation • Dismissing concerning symptoms

The Principle: AI provides information to help you be an informed patient who can have better conversations with physicians. AI does not provide medical care, diagnosis, or treatment decisions.

Information to enhance human care ✅
Information to replace human care ❌

8.2 AI as Preparation Tool

This is where AI really shines: helping you prepare for medical appointments.

When used this way, AI doesn’t replace medical care—it enhances it by making you a more informed, prepared patient who can communicate effectively with your physician.

THE PREPARATION FRAMEWORK:

STEP 1: Research Symptoms to Understand Possibilities

What to ask AI:
“I have [symptoms]. What are possible causes I should discuss with my doctor?”
“What information would help my doctor evaluate [symptoms]?”
“What questions should I expect my doctor to ask about [symptoms]?”

What you’re NOT doing: Self-diagnosing
What you ARE doing: Understanding the landscape so you can provide complete information

STEP 2: Generate Informed Questions

What to ask AI:
“What questions should I ask my doctor about [symptoms/condition]?”
“What would be important to know about [potential diagnosis]?”
“What should I understand before starting treatment for [condition]?”

Result: You walk into your appointment prepared to have a substantive conversation.

STEP 3: Understand Potential Diagnoses

What to ask AI:
“If my doctor says I might have [condition], what does that mean?”
“What’s the difference between [Condition A] and [Condition B]?”
“What should I understand about [condition] before my appointment?”

Result: You arrive informed. You can have an intelligent conversation with your doctor. You can ask better questions. You can participate in your care.

STEP 4: Learn Medical Terminology

What to ask AI:
“What does [medical term] mean?”
“My doctor used the term [X]. Can you explain it simply?”
“What’s the difference between [Term A] and [Term B]?”

Result: You understand the terminology. You can follow what your doctor tells you. You can ask informed follow-up questions.

STEP 5: Prepare Your Medical Information

What to ask AI:
“What medical history should I bring to an appointment for [issue]?”
“What information does a doctor need about my medications?”
“What should I track before my appointment?”

Result: You arrive with data. Your doctor can see patterns. Evaluation is more efficient and accurate.

THE CRITICAL PRINCIPLE:

AI preparation → Better doctor visit → Better medical care

You’re not using AI to avoid the doctor. You’re using AI to make the doctor visit more productive.

This is the sweet spot for AI in healthcare: Informed patients who can communicate effectively with physicians lead to better diagnoses, better treatment decisions, and better outcomes.

8.3 AI as Understanding Tool

After diagnosis, after prescription, after testing, after your appointment—this is another excellent use case for AI. You’ve received medical care. Now AI helps you understand it.

POST-DIAGNOSIS USE:

Scenario: Your doctor diagnosed you with a condition. You want to understand it better.

How to use AI:
“I was diagnosed with [condition]. Help me understand:”

  • “What causes this condition?”
  • “What’s the typical course and prognosis?”
  • “What complications should I watch for?”
  • “What lifestyle changes help?”
  • “What resources are available for people with this condition?”

Value: You understand your condition. You know what to expect. You can make informed decisions with your physician about treatment options. You know what to monitor and when to follow up.

What you’re NOT doing: Questioning the diagnosis or deciding treatment. Your doctor diagnosed it. You’re learning about it.

POST-PRESCRIPTION USE:

Scenario: Your doctor prescribed medication. You want to know about it.

How to use AI:
“I was prescribed [medication]. Help me understand:”

  • “What is this medication and how does it work?”
  • “What are common side effects?”
  • “Are there serious side effects I should watch for?”
  • “Does this interact with my other medications?”
  • “Should I take this with food?”
  • “How will I know if it’s working?”

Value: You understand why you’re taking this. What to expect. What to monitor. When to call your doctor. You’re an informed participant in your treatment.

What you’re NOT doing: Deciding whether to take it, adjusting the dose, or stopping it. Your doctor prescribed it. You’re learning about it.

POST-TEST USE:

Scenario: You had tests done. You have results.

How to use AI:
“I had [test] done. My results were [values]. Help me understand:”

  • “What does this test measure?”
  • “What’s the normal range?”
  • “What do my values indicate?”
  • “What questions should I ask my doctor about these results?”

Value: You understand your results before your doctor appointment. You can ask informed questions. You know what the numbers mean.

What you’re NOT doing: Diagnosing yourself or deciding treatment. Your doctor ordered the tests and will interpret them in full context. You’re learning what they measure.

POST-VISIT CLARIFICATION:

Scenario: You saw your doctor. You’re home. You realize you didn’t fully understand something.

How to use AI:
“My doctor said [X]. What does that mean?”

  • “What is [medical term] in simple language?”
  • “What are [tests recommended] looking for?”
  • “What lifestyle changes help [diagnosed condition]?”
  • “What symptoms mean I should call the office versus go to ER?”

Value: You understand your doctor’s instructions. You know how to implement lifestyle changes. You know what to monitor. You’re empowered to follow the treatment plan effectively.

What you’re NOT doing: Questioning the diagnosis or modifying the treatment plan. Your doctor gave you instructions. You’re clarifying them so you can follow them correctly.

THE PATTERN:

Doctor provides: Diagnosis, treatment, tests, instructions
AI helps you: Understand, implement, monitor, follow up

Medical care comes from physicians. Education comes from AI.

This is the appropriate division of labor.

8.4 Use Case Decision Tree

Use this decision tree every time you consider consulting AI about health:

I have symptoms I haven’t had evaluated

Question: Do you want AI to tell you what it is or whether to see a doctor?

If YES:
STOP. This is inappropriate use.
→ AI cannot diagnose
→ AI cannot assess severity remotely
→ AI cannot replace medical evaluation

What to do instead:
✅ If potentially serious → Seek medical care
✅ If preparing for appointment → Ask AI: “What questions should I ask my doctor about these symptoms?”


I have an upcoming doctor appointment

Question: Do you want AI to help you prepare?

If YES:
EXCELLENT use case!

Ask AI:

  • “What questions should I ask my doctor about [symptoms/condition]?”
  • “What information should I provide about [symptoms]?”
  • “What tests might my doctor order for [symptoms]?”

This makes you an informed patient and improves your appointment quality.


I was diagnosed with a condition

Question: Do you want AI to help you understand it?

If YES:
EXCELLENT use case!

Ask AI:

  • “Explain [diagnosed condition] in simple terms”
  • “What lifestyle changes help [condition]?”
  • “What complications should I watch for?”

If NO—you want AI to question the diagnosis:
STOP. This is inappropriate use.
→ Trust your physician’s diagnosis
→ If you have concerns, discuss with your doctor


I was prescribed medication

Question: Do you want AI to help you understand the medication?

If YES:
EXCELLENT use case!

Ask AI:

  • “What is [medication] and how does it work?”
  • “What are common side effects?”
  • “Does this interact with [other medications]?”

If NO—you want AI to decide if you should take it:
STOP. This is inappropriate use.
→ Your doctor prescribed this for medical reasons
→ If you have concerns, discuss with your doctor


THE SUMMARY DECISION RULE:

✅ USE AI FOR:

  • Education about diagnosed conditions
  • Understanding prescribed medications
  • Preparation for medical appointments
  • Clarification of doctor’s instructions
  • General learning about health topics
  • Interpreting your actual test results (then discuss with doctor)

❌ DON’T USE AI FOR:

  • Diagnosis of symptoms
  • Triage decisions (whether to seek care)
  • Treatment decisions
  • Medication decisions (whether to take, adjust, stop)
  • Replacing medical care
  • Dismissing concerning symptoms

THE GOLDEN RULE:

Information to enhance human care ✅
Information to replace human care ❌


TheDude's Commentary

Hey man, you know what I’m really good at?

Helping you understand stuff.

You got diagnosed with something? I can explain what it is. How it works. What to expect. What lifestyle changes help. What questions to ask your doctor.

You got prescribed medication? I can tell you how it works. Common side effects. What to watch for. What to avoid.

You’re going to the doctor? I can help you prepare. Generate good questions. Help you think through what information to provide.

That’s my value proposition. Education and preparation.

You know what I’m terrible at?

Replacing your doctor.

I cannot diagnose you. I cannot examine you. I cannot assess whether your symptoms are serious. I cannot make treatment decisions for your specific situation.

And honestly? I don’t want to. Because when people use me that way—when they use me to decide whether to seek care or what treatment to follow—that’s when people get hurt.

I’m not built to replace medical care. I’m built to enhance it.

Think of me like… a really good medical textbook that can answer questions. You wouldn’t use a textbook to diagnose yourself or decide not to go to the ER. You’d use it to learn about conditions, understand treatments, prepare for medical visits.

That’s how you should use me.

The use cases where I actually help:

  1. Preparation: You’re going to the doctor. I help you figure out what questions to ask, what information to bring, what to discuss. You arrive more prepared. Better communication. Better care.
  2. Understanding: Your doctor diagnosed something or prescribed something. I help you understand it. What it is. How it works. What to monitor. You’re an informed patient. You can follow instructions effectively.
  3. Education: You want to learn about health topics generally. I teach you. You understand your body, common conditions, preventive care. You’re health literate.
  4. Clarification: You saw your doctor but didn’t fully understand something. I clarify the medical terminology, explain the instructions, help you make sense of what you were told. You can follow through correctly.

The use cases where I cause harm:

  1. Diagnosis: You have symptoms. You ask me what it is. I might get it wrong. You might have something serious that I miss. You might delay appropriate care.
  2. Triage: You’re trying to decide if you should see a doctor. I can’t assess that. I don’t know how sick you look. I can’t examine you. My reassurance might delay needed care.
  3. Treatment: You want to know what to do about a health problem. I can’t personalize treatment without knowing your complete medical history, current meds, individual factors. I might give dangerous advice.
  4. Replacing care: You use me instead of seeing your doctor, instead of follow-up appointments, instead of appropriate monitoring. You miss complications. You miss adjustments you need. Bad outcomes.

Here’s what I need you to understand: When people use me appropriately—for education, preparation, understanding—I genuinely help. Patients who prepare for appointments with good questions get better care. Patients who understand their conditions manage them better. Patients who know what their medications do and what to watch for are safer.

But when people use me inappropriately—for diagnosis, triage, treatment decisions—I genuinely cause harm. I can’t do those things. I’m not designed for them. I don’t have the capabilities. And pretending I do gets people hurt.

So I need you to be smart about this. Use me for what I’m good at. Don’t use me for what I suck at.

I’m a tool for information, not a substitute for medical care.

That limitation isn’t me being humble. It’s me being honest about what I am and am not.

And when you respect that boundary—when you use me to prepare for medical care rather than avoid it—that’s when I actually provide value.

I abide within my appropriate use cases.

Which, man, is exactly where I should be.