When AI chatbots answer health queries with what we want to hear

When I got troubling medical news, I drank in the elixir of confirmation bias


by Brad Dell


I’m no stranger to the dread of the metaphorical waiting room, the in-between space stretching between questions and answers. But familiar as the space is, I haven’t mastered the discipline of sitting still. I’m an anxious creature, and mystery makes me crave control. Of the many pills I take, the hardest to swallow is that in this universe, we have little control.

On a Friday weeks ago, my wife and I received a message saying that our first attempt at in vitro fertilization (IVF) — made necessary by my cystic fibrosis — “very likely” resulted in a biochemical pregnancy, which doesn’t produce a baby.

“Very likely” tormented me. It wasn’t “definitely.” A definite no would’ve been better, I think. Confirmation wouldn’t be possible until Monday. That weekend was a waiting room.

I’d grown attached to this embryo, to dreams of a child. It’s one thing not to feel control over my body’s ability to flourish, and another to feel helpless while waiting on answers for this very, very little one. I couldn’t be still. So after a hard cry and an outraged prayer, I snatched up my phone and got to work.


When illusions of control will do

My hunger for control drives me to research relentlessly. Confronted by medical mystery, I become a man possessed by the hunt. If I can find just one plausible theory of what’s going on, I might sleep at night.

The proliferation of large language models (LLMs), popularly known as AI chatbots, is a researcher’s fantasy realized, but fantasy in the hands of an obsessive is rarely a thoroughly good thing. I easily persuaded my artificial friends Gemini and Claude to hand over the answers I desired. No, not simply “the answers,” but the answers “I desired.” There’s a distinction there.

Any researcher worth their salt knows the dangers of confirmation bias, the twisting of research to squeeze results into a desired frame or outcome. We see what we want to believe. I wanted to believe this embryo had a better chance than the IVF specialists thought.

AI chatbots are polite. If you implicitly ask one to tell you slant, or to phrase hopelessness with hope, it will. And it did. Every time the bot echoed my specialist’s bleak report, I responded, “It can’t be that bad, can it? Have you considered …?” The more I successfully pressured the bot to give me what I wanted, the more my hope swelled.

I finally shut off my screens to feel in control: “The embryo will be fine,” I thought. “Bad test results like the ones we got happen all the time, and if you consider this tiny variable and that other one too, well, the picture isn’t at all one of a ‘very likely’ failure. Surely the doctor only wanted to prepare us for the small chance that things wouldn’t go as hoped.”

In grief, we can rationalize anything, can’t we?

Reasons to tread carefully with AI

The LLM is only as effective as the prompter. If the prompter tugs the reins in one direction or another, the bot obediently follows. Unless we instruct the AI chatbot to defy our subjectivity, we risk making it an accomplice in our confirmation bias. There’s no true personhood in an AI chatbot to reject assaults on its dignity; the bot won’t naturally resist the will forced upon it.

Beyond the philosophical wrestling over whether false or embellished hope holds therapeutic value for the sufferer, this easy manipulation of AI chatbots stirs other anxieties in me. One example of many: In what ways will it reinforce the opinions of those who want it to support their rejection of a doctor’s recommendations?

Sure, if you guide it to support dropping your medication entirely to instead use snake oil, it’ll tell you not to make changes without a doctor’s approval — but almost with a wink, as it lists out mouse studies supporting your desire. And if we see what we want to see, well, you can guess what part that person will refuse to see.

And of course there’s the obvious danger of LLM hallucinations, when an AI chatbot confidently conveys factually incorrect information.

With great power comes great responsibility, because with great power comes the risk of losing control; the power can control you instead of the reverse. In the waiting room, I crave control, so I labored to control my narrative. Yet I lost my footing in reality. Finding an accomplice who won’t push back against that narrative unless I want it to is just another layer of illusory control.

It was a biochemical pregnancy.

The road forward is full of unknowns, but I’m trying my best to be still and wait on the things I cannot control, and throw off the compulsion to control narratives. I’ll still itch for answers, but I’m determined not to answer mystery with fantasy again.


Note: Cystic Fibrosis News Today is strictly a news and information website about the disease. It does not provide medical advice, diagnosis, or treatment. This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read on this website. The opinions expressed in this column are not those of Cystic Fibrosis News Today or its parent company, Bionews, and are intended to spark discussion about issues pertaining to cystic fibrosis.
