Michael McHale, 2025-05-07 07:30:00
MPS said that AI must be integrated in health systems in a ‘useful and safe’ way for both patients and clinicians
Guidelines for the use of artificial intelligence in healthcare should ensure that clinicians don’t become ‘liability sinks’ where they end up being held responsible for AI-influenced decisions, even when the technology may be flawed.
The Medical Protection Society (MPS) issued the warning in response to a consultation seeking views on the development of a national framework to drive and promote a safe and responsible approach to the use of AI in healthcare in Ireland.
In its submission, MPS said that AI must be integrated in a ‘useful and safe’ way for both patients and clinicians.
Referring to the risks of ‘liability sinks’, the organisation said that healthcare providers using AI systems should have product liability cover for any loss to a patient arising from an incorrect or harmful AI recommendation.
Alternatively, it advised that health services using the technology should ensure their contracts with AI companies include an indemnity or loss-sharing mechanism for cases where a patient alleges harm from an AI recommendation implemented by a clinician, and where the clinician is subsequently held liable.

Prof Gozie Offiah, MPS Foundation chair
The organisation added that, to realise the ‘incredible opportunities’ AI offers for improvements in patient care, frontline healthcare professionals need to be confident in its use and safety, and not wary about how it may impact their decision-making and patient care.
“Enabling greater confidence in AI among clinicians is vital if the potential benefits of AI are to be unlocked for patients,” said chair of the MPS Foundation Prof Gozie Offiah.
MPS, which represents the interests of over 16,000 healthcare professionals in Ireland, made seven key recommendations in its submission to the Health Information and Quality Authority:
- Healthcare providers, clinicians and AI providers should be encouraged to carry out ongoing monitoring and risk assessment of the AI tools they use, in order to ensure patient safety.
- All aspects of the framework should be developed in the context of, and should complement, all relevant regulatory guidance. This includes Medical Council guidance, medical device regulation where appropriate, and data protection regulations.
- Clinicians should be provided with, and should ask for, training on the AI tools they are expected to use. This training should cover the AI tool’s scope, limitations and decision thresholds, as well as how the model was trained and how it reaches its outputs. As part of this, clinicians should aim to be aware of the data on which the tool relies and of any potential bias.
- Clinicians should only use AI tools within their existing expertise. In cases where a clinician’s knowledge is limited, they should seek the advice of a human colleague who understands the area well and can oversee the AI tool, rather than relying on the AI tool to fill the knowledge gap.
- Clinicians should regard the input from an AI tool as one part of a wider, holistic picture concerning the patient, rather than the most important input into the decision-making process. Clinicians should also feel confident to reject an AI output that they believe to be wrong, or even suboptimal for the patient.
- AI developers and clinicians should engage with each other wherever possible to ensure that AI tools are user-focused and fit for purpose for their intended contexts. This should apply not just during the development of AI tools but also in their ongoing upkeep and improvement.
- Clarity around the liability of AI providers will be needed, particularly in relation to AI systems which make recommendations. The framework should consider what steps are needed to reduce the prospect of clinicians becoming ‘liability sinks’, where they end up absorbing liability for AI-influenced decisions, even when the AI system itself may be flawed.