Katie Palmer, 2025-04-30 08:30:00
It’s been a year since the federal government clarified that technology tools used in health care shouldn’t discriminate against patients based on protected traits such as age, sex, race, and disability. A lot has changed in that time, including a new president in the White House. But as the May 1 deadline approaches this week, health systems face significant clinical and political uncertainty over how to comply with the rule.
The lack of clarity is further delaying, and potentially disrupting, an already complex effort by health systems and technology vendors to prevent discrimination by artificial intelligence and other clinical tools used in decisions about patient care.
The nondiscrimination provision of the Affordable Care Act, called Section 1557, has always prohibited discrimination on the basis of traits like race, sex, age, and disability. But last year the Biden administration issued a final rule clarifying that patient care decision support tools, including artificial intelligence algorithms, fall under that banner. As of May 1, federally funded health systems will have to show they've worked to identify tools that use protected traits, and to mitigate the risk of discrimination from their use.