In case you hadn’t noticed, there is a lot of excitement about artificial intelligence. AI of today is helping automate mundane tasks to make clinicians’ lives more efficient, productive, and frictionless. AI of the immediate future is expected to enhance and augment our human abilities to tackle increasingly complex tasks. AI of the distant future — artificial general intelligence, or AGI — is expected to replace us. Though it seems no industry will be untouched by its digital hand, there is particular enthusiasm about the promise of AI in healthcare.
The exact role AI will play in healthcare remains a subject of intense debate. Most agree that, at the very least, AI-enabled tools will improve healthcare delivery as an adjunct to clinician-led treatment. Big Tech companies and large healthcare incumbents are careful to clarify their intent to augment, not replace, doctors. The reasons for this distinction are myriad and include concerns over privacy, security, accuracy, liability, and regulation. Others aren’t quite as circumspect. There’s a growing subset of healthcare innovators who believe AI has the potential to replace doctors. AI-enabled full-body scans are purported to allow early disease detection, obviating the need for invasive, high-morbidity treatments. (Some believe they’ll obviate the need for radiologists too.) Others (including noted venture capitalist Vinod Khosla) predict a future in which generative AI, large language models (LLMs), and natural language processing (NLP) combine to form “AI doctors” and “primary care apps.” Startups, smaller tech companies, and Silicon Valley entrepreneurs are bullish on AI’s ability to do the work of physicians.
The furious hype cycle makes it difficult to parse exactly where things stand. Separating future potential from present-day reality requires an ability to cut through thick layers of fluff. But to deny the future of artificial intelligence in healthcare is to remain flat-footed and risk obsolescence. (The politically correct view is that AI won’t replace doctors, but doctors who embrace AI will replace those who don’t.) AI tools are already showing promise in areas such as ambient dictation, clinical decision support, prior authorization, and predictive analytics. With a feared doctor shortage looming, artificial intelligence also holds the promise of increasing access to care. Healthcare innovators are enamored of the notion that tech will allow doctors to “practice at the top of their licenses.”
In my mind, “practice at the top of your license” has become more of a healthcare innovation meme than a useful concept. I practice at the top of my license every single day. The problem is the number of hoops I have to jump through to do so. Many of them are unavoidable — poorly designed EMRs, documentation requirements for billing, prior authorization and peer-to-peer calls, reams of word-salad home health orders (digital and paper) requiring signatures, various administrative requirements, etc. All of these take time away from patient care and contribute to frustration, and few of them demonstrably make things better (despite what committees, organizations, and regulators may tell you). AI tools can ease many of these tasks, but to what degree? Considering implementation costs, are they really cost-effective? Do they create their own set of non-clinical tasks (training, review, troubleshooting) that offset any gains in time or efficiency?
All of this ignores the biggest issue: we need these tools because we have created an increasingly complex system. To our detriment, we have stripped away the heart of medical practice and eroded the doctor-patient relationship. These tools simply address self-inflicted wounds, placing high-tech Band-Aids on low-tech problems. Believing you’re going to fix the ills of medicine through AI-powered tech tools is akin to believing that using GPS in your car will eliminate traffic congestion. Ours is primarily an infrastructure problem, and that’s really where healthcare innovation should start. AI tools built on top of broken, convoluted, and inefficient processes aren’t technology at its most powerful. To capture its full potential, implementation of healthcare tech should occur alongside sweeping innovation in care delivery. This includes but isn’t limited to:
Seamless patient scheduling/referrals
The office visit, ASC, and hospital experience
Care coordination (acute/episodic and chronic/longitudinal)
Treatment protocols (medication management, postop care, physical therapy, etc.)
Communication (asynchronous messaging, LLM chatbots, and good ol’ phone calls)
Reduction in administrative burden (prior authorization, paperwork, maintenance of certification, E&M coding requirements)
Redesign of payment models (meaningful quality metrics, blending of FFS and VBC, more collaboration between payers and providers)
Of course, the list goes on, and these are all complex topics that deserve their own posts. The point here is that the best way to innovate in healthcare is to bake technology into a ground-up approach (or at least an extensive reworking of the current system). I’m increasingly convinced that the push to replace doctors with AI is coming from several places:
1. An investment/entrepreneur/tech community still figuring out how to extract value from the healthcare system, having been burned by 2021-2022-era digital health (cynical take)
2. Frustration with the current system’s resistance to change, inability to deliver an enjoyable experience, paternalism/incumbency, and lack of high-quality, consistent care (pessimistic take)
3. Recognition that healthcare has long lagged other fields in the adoption and implementation of technology, paired with the promise and potential of current and near-future-generation AI to overcome existing barriers (optimistic take)
4. A combination of 1-3 (realistic take)
It’s difficult to parse which way all of this will break. As the last few years have proven, healthcare has an inertia and gravitational pull that is hard to escape. There is the need to balance the art of medicine with the science of medicine, a core concept the technosphere repeatedly seems to underestimate. The argument could be made that healthcare’s biggest issue is that the human element has been de-emphasized, removed, and forgotten (something all sides bear responsibility for). An AI future of feigned empathy, marginally better diagnostic ability, and surreptitious hallucinations may not be better than a present of strained empathy, marginally better intuitive skills, and protectionist egosyntonics. We need to bridge the gap between traditional healthcare and healthcare technology in a way that is synergistic, not adversarial.
*[Note: H/t to hip-hop originals De La Soul, who I blatantly ripped off for the title of this post. Their album “Art Official Intelligence: Mosaic Thump” came out in 2000 — two decades before the current AI hype. Many frontline healthcare workers might identify more with the title of their 1989 debut album: “3 Feet High and Rising.”]