Kamran Abbasi, 15 May 2025
At a conference in Hong Kong last year almost every presentation enthused about the potential of artificial intelligence. The conference wasn’t even about AI, the great global hope to solve every problem and save us money to boot. If you’re out of solutions for your health service or economy, sprinkle some AI magic on your strategic plan—your imagined prospects will bloom, and your numbers will add up.
AI is a runaway technology, perhaps justifiably so. Lost your faith in carbon capture as a response to the climate crisis? Don’t worry: AI assisted carbon capture is the real deal. Take Tony Blair’s word for it (doi:10.1136/bmj.r955).1 Want better cardiac imaging? AI has a use case for that (doi:10.1136/bmj.r966),2 although it’s not clear whether better imaging improves patient outcomes. Fed up with the tyranny of administrative tasks, from making consultation notes to writing discharge letters? Yes, AI is for you. Many doctors seek support for those tasks (doi:10.1136/bmj.r974),3 AI probably improves productivity, and patients are quite open to its use provided their data are secure and remain private (doi:10.1136/bmj.r391).4
What we know for sure about AI is that the genie is out of the bottle—and there’s no ordering it back in unless civilisation collapses. What we probably know about AI is that we can make a good enough case for its use in “back office” tasks to reduce bureaucracy and administrative workload. AI, when used judiciously, can make our working lives more efficient, productive, and fun.
Chris Stokel-Walker explores the use of “ambient scribe” tools that help clinicians with consultations, allowing doctors to focus on talking to patients without being distracted by note taking (doi:10.1136/bmj.r663).5 The response is generally positive across a range of settings. AI tools will continue to evolve rapidly, although they come with caveats. AI is known for hallucinating or confabulating—in other words, guessing to fill gaps in the narrative that it generates. The more distant you are from the situation being summarised, the less likely you are to spot errors. There may be language barriers to overcome. Any AI tool is only as good as the information it’s trained on, and information has inherent biases. All of this means that AI might adequately capture 90-95% of a consultation, but human supervision and review remain essential.
Another potential use is in training clinicians and supporting clinical work (doi:10.1136/bmj.r822)6—for example, to improve interpretation of medical imaging. An explosion of studies details the diagnostic power of AI to identify abnormalities, although evidence is still emerging on whether higher rates of detecting abnormalities improve patient outcomes. Again, human intervention is required to consider the context and implications of any diagnostic findings reported by an AI tool. A related concern is whether over-reliance on AI tools will de-skill clinical staff.
Improving trust
The boiled-down message is this: AI for administrative tasks? Yes. AI for diagnostics and training? Yes, but we must check uncritical enthusiasm until we better understand the impact on patient care and clinician training. AI for improving patient outcomes? To answer this, we need more and better designed studies. Just as reporting guidelines have improved the reporting of randomised trials and protocols—on which we update guidance this week (doi:10.1136/bmj-2024-081123; doi:10.1136/bmj-2024-081477; doi:10.1136/bmj.r494)7 8 9—we need better reported studies of AI tools, using guidelines such as TRIPOD+AI for prediction model studies (doi:10.1136/bmj-2023-078378).10 More robust evaluation of AI tools will improve public and professional trust in AI—as will better integration with evidence based clinical resources.
Many of the possibilities and pitfalls of AI are captured in the rapid adoption of AI therapy apps to support people’s mental health and wellbeing (doi:10.1136/bmj.r821).11 Nonetheless, the potential of AI is clear. Consider pandemics: Anthony Costello argues that the UK made major strategic errors, compared with similar countries, in its response to covid-19 (doi:10.1136/bmj-2024-082463),12 and the response to the next pandemic is expected to rely heavily on AI.
To make sense of these challenges, Canada has just appointed its first minister of AI and digital innovation.13 One day such ministries might rank in importance with ministries of finance and health. Let’s hope that Canada’s new ministry for the future helps make a better one.
Whatever the future holds for AI, it will still be shaped by humans. It’s on us—and on trusted information. If AI saves us, the guiding hand will be human. If AI kills us, as it might in a nuclear war (doi:10.1136/bmj.r881),14 the bloodstained hands will also be all too human. Nor must we forget the quality, reliability, and trustworthiness of the information that AI is trained on and relies on, which ultimately dictates its value. The magic bullet is smart AI, wise humans, and trustworthy evidence working in technological harmony.