In partnership with

Hey Health Techies!
I didn’t set out for this to be an AI newsletter (and still don’t intend it to be). But since the beginning of the year, AI in healthcare hasn’t just been advancing. It’s accelerating.
If you spend any time online, you probably saw the viral essay last week titled “Something Big Is Happening.” It captured a feeling many of us haven’t been able to articulate — that we may have crossed some invisible threshold.
But here’s what keeps nagging at me: when it comes to AI adoption in healthcare right now, you’re darned if you do and darned if you don’t. If you don’t adopt, you’re somehow falling behind; but if you do, you face a stigma that I think we all know is there, yet I don’t see it discussed all that often.
🧠 Clinicians, AI, and the perception problem
Artificial intelligence is no longer a futuristic concept in healthcare — it’s already in the exam room, inbox, operating room, and chart. In fact, I have a small favor to ask: I’d love to get a pulse on just how many of you are using AI day to day. Would you take a few seconds to answer the poll at the bottom of this newsletter? Thanks so much 🙏🏽
But while adoption is rising quickly, acceptance is far more complicated. The biggest barrier right now isn’t capability. It’s trust and perception.
One of the best ways to understand what’s happening is through a classic framework from innovation science: the adoption curve.
If you’ve ever seen the diffusion of innovations theory, you know new technologies spread through populations in predictable stages:
Innovators → Early adopters → Early majority → Late majority → Laggards
Healthcare is currently somewhere between early adopters and the early majority for AI. But unlike consumer tech like the latest iPhone, medicine can’t just ask “Does this work?”
It also has to ask “Is this safe? Ethical? Defensible? Reimbursable?”…and a slew of other questions that can dramatically slow movement across the curve.
Right now, the clinicians using AI most confidently tend to be:
Tech-forward physicians
Academic centers
Leaders with institutional support
People personally experimenting with tools
Meanwhile, many others are watching carefully from the sidelines.
Not because they’re anti-technology, but because the stakes are higher and because it turns out that there may be other factors at play.
A study published back in August explored how physicians view other physicians’ use of generative AI for decision-making. The results may not surprise you; instead, they might solidify things you’ve felt about the healthcare industry for a long time.
Researchers ran a randomized experiment in which practicing clinicians evaluated fictional physicians across three scenarios:
A physician using no AI
A physician using AI as a primary decision-making tool
A physician using AI as a verification or second-opinion tool
Physicians who relied on AI as their primary decision maker were rated significantly lower in clinical skill, competence, and overall care quality compared with those who used no AI at all. Even framing AI as a “verification” tool only partially repaired the damage. Perceptions improved, but still didn’t match the ratings of physicians who avoided AI entirely.
At the same time, participants acknowledged that AI can improve accuracy, which suggests a disconnect between what they believe about AI’s usefulness (and even its effect on patient outcomes) and how they judge the competence of physicians who use it.
Healthcare has long rewarded decisiveness, expertise, and independent judgment. Seeking help, even from a powerful tool, can be interpreted as weakness rather than wisdom. Honestly, that tells you a lot about what it’s like to work in healthcare.
No one should be put in a position where they’re afraid to adopt technology simply because of what others will think of them. If the technology is flawed, erroneous, or otherwise unsafe, that’s a different story. But if merely using it casts doubt on your competence in the minds of your peers, that’s certainly enough to keep many from bringing it into their practice.
That social penalty could become a major barrier to adoption, not because the technology isn’t ready, but because the culture isn’t.
We’ve all heard it a million times at this point: AI won’t replace clinicians, but clinicians who know how to use AI will move ahead of those who don’t. Right now, we’re watching clinicians move across the adoption curve in real time, as AI use in practice is no longer a question of if but of when and to what extent.
With that in mind, I challenge you to map out where you think you are on the curve. Early? Still cautious? Where would you like to be? What would make you feel comfortable in taking that next step? Checking in with yourself on these questions regularly will hopefully help guide some of your actions and choices as we navigate this new frontier.
Partnered with Test Double
Symptoms to Sources: Biggest healthcare software challenges in 2026

Why do tech initiatives fail? When you treat symptoms instead of root causes, it’s hard to get to lasting success. Test Double shares initial research in this healthcare software challenges report. You can also add your voice by completing a short survey.
📰 Weekly Wrap-up
AI company Anterior has raised $40 million to reduce back-office burden for health plans
Interesting piece on why Anthropic thinks its tech is a critical missing piece in EHRs
📌 Job Board
Don’t miss these open roles 👀
Partner Success - Account Manager - Sayvant Health
Program Manager, Roadmap Planning - Cohere Health
Claim Operations Manager - Lyra Health
and more!
Until next time,
Lauren