Public health science has failed the Covid postmortem

Steven Phillips, 2025-05-02 08:30:00

The fifth anniversary of President Trump’s March 2020 declaration of a national Covid-19 emergency has prompted a surge of retrospective assessments. Government agencies, expert panels, think tanks, and media outlets all contributed to a sprawling postmortem. The goal was to draw lessons from the pandemic’s devastating toll in hopes of better preparing for future crises.

Much of this analysis is sound — calls to improve stockpiles, streamline data-sharing, communicate more clearly in a crisis, and increase public trust in government and science are hard to argue with.

But these postmortems also reflect a troubling trend: They collectively fail to evaluate which specific policies and interventions actually worked, which didn’t, and which may have caused harm. That core question — the balance of what saved lives, what cost lives, and what collateral economic and social damage can be attributed to our response — is still largely unanswered.

In public health and the social sciences, this is called outcome evaluation. It’s how we distinguish between good intentions and effective policy. And its near-total absence from the Covid-19 postmortem is the dog that didn’t bark. Perhaps many are trying to avoid the elephant in the room: Can we distinguish between “what the virus did to us” and “what we did to ourselves” — via the vagaries of human judgment and institutions?

During the pandemic, many interventions were rolled out quickly, with urgency and moral certainty. That was understandable. But five years later, we owe it to ourselves to ask which of those decisions delivered results and which may have made things worse. Instead, the same flawed frameworks that steered us wrong during the crisis continue to guide our understanding of it. The public health establishment’s postmortem is now using the same distorted lens that misread aspects of the pandemic in real time.

Throughout the crisis, science was frequently used to justify policy — not to interrogate it. Messaging was often inconsistent, politically attuned, and overly reliant on hypotheses and assumptions that weren’t grounded in fact. Rather than adapting hypotheses to evidence, the policy response was frequently just the reverse. It was often shaped by orthodoxy, institutional groupthink, and partisan polarization.

It’s not just the wisdom of hindsight to say this. Many of these errors were obvious in the moment. Science became a rhetorical shield — “Follow the science” — when it should have been a process of continual testing, refinement, and correction. That didn’t happen often enough.

Some examples are now well known. The virus was primarily spread through airborne aerosols, not droplets, making plexiglass dividers and deep-cleaning rituals ineffective. The closure of beaches and bans on outdoor gatherings lacked scientific justification. Testing rollout was slow and chaotic. The 6-foot social distancing rule was arbitrary. Mask guidance changed multiple times and was often delivered with condescension rather than clarity. Perhaps most tragically, infected patients were sent back to nursing homes in the early days, leading to avoidable deaths.

Broader policy failures were even more consequential. Lockdowns, school closures, and border controls may have had some short-term utility, but in many cases, the social and economic costs far exceeded the health benefits — particularly when extended long past their initial rationale. Mental health crises, lost learning, shuttered small businesses, and widespread mistrust were not collateral damage — they were foreseeable consequences.

And we still don’t know how effective many of these policies were because their impacts have not been systematically measured. The “science of pandemics” is inherently messy, but it’s also rich with opportunity — especially now. We are sitting on a mountain of data. The United States’ decentralized, federalist response functioned as a massive, uncontrolled experiment. Some states and districts closed schools for more than a year; others reopened them in months. Some imposed mask mandates and curfews; others did not. Some ramped up contact tracing; others did not try.

All of this variation, combined with detailed demographic, health, education, mobility, and economic datasets, creates an unprecedented opportunity to understand what worked. We can compare how policies influenced hospitalization rates, excess deaths, long Covid prevalence, and downstream outcomes like learning loss and labor force exit.

Did states with longer lockdowns fare better or worse than those with lighter restrictions, once demographics and baseline health are accounted for? Did masking mandates meaningfully reduce hospitalizations? What were the long-term effects of remote schooling, not just academically, but economically and socially? How did essential worker outcomes differ from similarly situated nonessential workers? We don’t have a clue because we haven’t looked.
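
These are answerable questions. As a purely illustrative sketch — assuming a hypothetical state-level file (state_panel.csv) with columns for excess deaths, weeks of school closure, and basic demographics, none of which refer to a real published dataset — the first pass could be as simple as a regression that adjusts outcomes for baseline differences across states:

```python
# Illustrative sketch only: does school-closure duration track excess
# mortality once baseline demographics are accounted for?
# The file name and column names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("state_panel.csv")  # one row per state

# OLS with demographic controls; the coefficient on closure_weeks is an
# adjusted association, not a causal estimate by itself.
model = smf.ols(
    "excess_deaths_per_100k ~ closure_weeks + median_age + pct_over_65 + poverty_rate",
    data=panel,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.summary())
```

Even a crude adjusted comparison like this would be more than most postmortems have attempted; serious versions would layer on richer controls and sensitivity checks.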

International comparisons are just as important. Countries took drastically different approaches — from China’s “zero Covid” lockdowns to Sweden’s hands-off model. Now that the virus has broadly swept across the globe, we can use rigorous comparative analysis to determine which strategies actually delivered better long-term outcomes. Were early triumphs just illusions of timing, or did certain approaches genuinely outperform others? Why don’t we know this?

The key is applying modern “big data” analytical tools — machine learning, causal inference, time-series analysis — to sift signal from noise. These tools excel at handling complex, multivariate relationships, including confounding variables, and can help us understand not just what happened but why. In many cases, the relevant data already exist. What’s missing is the institutional will and methodological rigor to put them to work.
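
To make the causal-inference piece concrete — again only a sketch, with hypothetical file and column names and no claim about what the real data would show — a standard difference-in-differences design could compare weekly hospitalizations in states that adopted a mask mandate against those that never did, before and after adoption:

```python
# Sketch of a two-way fixed-effects difference-in-differences design.
# "state_weekly.csv" is a hypothetical panel: one row per state per week,
# recording the week each state adopted a mandate (missing if it never did).
import pandas as pd
import statsmodels.formula.api as smf

weekly = pd.read_csv("state_weekly.csv")

# 1 only for mandate states in weeks after adoption; never-treated states
# stay 0 throughout (comparisons against a missing mandate_week yield False).
weekly["treated_post"] = (weekly["week"] >= weekly["mandate_week"]).astype(int)

# State and week fixed effects absorb level differences across states and
# common shocks over time; the treated_post coefficient is the DiD estimate.
did = smf.ols(
    "hospitalizations_per_100k ~ treated_post + C(state) + C(week)",
    data=weekly,
).fit(cov_type="cluster", cov_kwds={"groups": weekly["state"]})

print(did.params["treated_post"], did.bse["treated_post"])
```

None of this is exotic; the methods are decades old, and their caveats — including the well-documented pitfalls of staggered policy adoption — are part of the standard toolkit. What has been missing is the will to run the analysis and publish the answer.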

This failure to interrogate our pandemic response has serious implications for the future. Without real outcome analysis, we’re doomed to apply the same distorted lens to the next crisis. Omniscient, anodyne “lessons learned” and generic calls for coordination and trust are no substitute for the hard, uncomfortable work of figuring out what failed — and being willing to say so.

Worse, the lack of honest reckoning deepens a more corrosive legacy of the pandemic: a collapse in public trust. Millions, faced with mixed messages and inconsistent policies, turned to conspiracy theories and lost faith in science altogether. That erosion wasn’t just unfortunate — it was in many cases earned. If science is to reclaim credibility, it must be seen interrogating its own failures, not shielding them. A rigorous, apolitical postmortem won’t be easy — in fact, it may be politically and institutionally impossible. But it is the main path to restoring trust. Without that accountability, the next time science asks the public to listen, fewer people will. And the consequences of that distrust could be catastrophic.

There’s also a deeper problem. Public health, as a discipline, has shown an unwillingness to reflect on its own mistakes. Science is supposed to be self-correcting. Instead, public health science hardened into political dogma, critics were dismissed as cranks or partisans, and institutions circled the wagons instead of inviting challenge. That must change if we are truly committed to an evidence-driven future.

If we want to be ready for the next pandemic, we need to take a two-pronged approach.

First, we must disentangle science, culture, and politics in pandemic policymaking. Only by understanding their separate contributions can we begin to build more resilient, evidence-based strategies.

Second, we must broaden our preparedness lens. The next pandemic may not look like Covid-19 — it could be faster-moving, more lethal, or biologically unfamiliar. A narrow focus on respiratory viruses leaves us exposed.

But realizing a data-driven, apolitical review of our Covid response is easier said than done. The very tools that could help — big data, retrospective analytics, real-time genomic surveillance — require institutional trust, stable funding, and a shared commitment to scientific rigor. None of these are guaranteed in an environment where science itself is politicized. The Trump-era erosion of scientific norms, coupled with a broader cultural backlash against expertise, has made honest self-assessment politically risky and professionally fraught.

Meanwhile, many of the institutions best positioned to lead this reckoning — federal agencies like the Centers for Disease Control and Prevention and National Institutes of Health, public health associations, and the major academic journals — have instead moved to defend their past positions, often marginalizing dissenting perspectives. Even journals dedicated to scientific integrity have acted more as custodians of the orthodoxy than as platforms for rethinking foundational assumptions. If we are serious about reform, we need more than better data and analytics — we need interdisciplinary accountability: a willingness for virologists to hear from sociologists, for modelers to engage with ethicists, for epidemiologists to work with clinicians on the ground. Without that cross-disciplinary intellectual openness, we risk learning only what we already believe.

There is still time. As memories fade and political narratives harden, data endures. The pandemic created the conditions for a massive national and global learning opportunity. But only if we’re willing to ask the right questions — and accept the answers, even when they challenge our assumptions.

Without that reckoning, we’re left with a postmortem that looks more like a eulogy than an investigation. And with it, the very real risk that we’ll fight the next war with the tools — and the thinking — that failed us in the last.

Steven Phillips, M.D., MPH, is a fellow of the American College of Epidemiology and vice president of science and strategy for the COVID Collaborative.

