“They’ve Learned Nothing”: How the UK Covid Inquiry Repeats—and Conceals—the Fundamental Errors of the Pandemic Response
Abstract
The UK Covid-19 Inquiry was established to examine the most consequential domestic policy decisions in modern British history. Yet its central conclusions reflect the same modeling assumptions, policy paradigm, and intellectual blind spots that shaped the original pandemic response. The Inquiry’s claim that the UK “locked down too late” is not an empirical finding but the predictable output of an unchanged modeling architecture derived from Imperial College’s Report 9. By failing to interrogate the modeling itself, refusing to consider Sweden as a legitimate comparator, and avoiding a genuine cost–benefit analysis of non-pharmaceutical interventions, the Inquiry has demonstrated that “they’ve learned nothing.” More troublingly, the Inquiry appears structurally unwilling to learn. A candid investigation would expose deep institutional failures in government, science, and public health. The potential political costs of such an admission provide a powerful incentive to frame the pandemic as a problem of timing and administration rather than strategy. This paper argues that the Inquiry’s approach is analytically narrow, strategically misleading, and ultimately protective of the very institutions it is meant to evaluate.
1. Introduction
Public inquiries are supposed to turn traumatic episodes into learning. After years of hearings, however, the UK Covid-19 Inquiry has delivered findings that largely reaffirm the assumptions of early 2020. Its core message is simple: lockdowns were necessary, lockdowns worked, and the only real problem was that the UK locked down too late (UK Covid-19 Inquiry 2024).
Critics have been blunt in response. On 20 November 2025, Jay Bhattacharya wrote on X: “Fact check: not locking down at all (like Sweden) would have saved lives in the UK. Hard to believe how much money the UK spent on its sham covid inquiry” (Bhattacharya 2025). The tweet captures two themes. First, that the Inquiry has refused to confront evidence that contradicts its underlying paradigm, particularly Sweden’s experience. Second, that the exercise has become a politically convenient way of avoiding a full reckoning with the failures of the UK’s strategy.
The central argument of this paper is that the Inquiry has “learned nothing” because it has chosen not to. Analytically, it has reproduced the same modeling framework that produced lockdowns in the first place. Institutionally, it has strong incentives to avoid conclusions that would show just how badly core actors—ministers, officials, and scientific advisers—performed. A true accounting would anger the public and damage the credibility of the very institutions that commissioned the Inquiry. The safest option is to focus on timing, personality, and process while leaving the strategic paradigm intact.
2. Imperial College’s Hidden Constitution: The Original Modeling Error
The UK’s initial pandemic strategy was defined by Imperial College’s Report 9 (Ferguson et al. 2020). That document used a deterministic SEIR model to project infections, hospitalizations, and deaths under various policy scenarios. The model was built on several critical assumptions.
First, it treated the population as largely homogeneously mixing, with limited attention to age-structured contact patterns or the highly skewed risk of severe outcomes by age and comorbidity. Second, it assumed that, absent strong legal mandates, behavior would remain close to normal. Third, it defined “suppression” through prolonged or repeated population-wide restrictions as the only credible strategy for avoiding catastrophic health-system collapse. Alternatives, such as focused protection of high-risk groups combined with lighter-touch measures for the general population, were modeled as leading to unacceptably high mortality.
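For reference, the generic deterministic SEIR skeleton on which such projections rest divides the population into susceptible (S), exposed (E), infectious (I), and recovered (R) compartments. Written here in standard textbook form, not Report 9's exact specification:

```latex
\begin{aligned}
\frac{dS}{dt} &= -\beta \frac{S I}{N}, &
\frac{dE}{dt} &= \beta \frac{S I}{N} - \sigma E, \\
\frac{dI}{dt} &= \sigma E - \gamma I, &
\frac{dR}{dt} &= \gamma I,
\end{aligned}
```

where N is the population size, β the transmission rate, 1/σ the mean latent period, and 1/γ the mean infectious period. The homogeneous-mixing assumption criticized above is visible in the single βSI/N term: every susceptible individual faces the same force of infection, regardless of age, contact network, or individual risk.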
Just as important as what was included was what was not. The model did not attempt to incorporate the collateral harms of restrictive policies: missed cancer diagnoses, delayed surgeries, mental-health deterioration, disruption to routine care, educational losses, or long-term economic damage. It optimized a single objective—short-run Covid mortality—subject to capacity constraints in the National Health Service.
In practice, Report 9 functioned like a hidden constitution. It defined the crisis in a particular way and sharply constrained the policy space. It made lockdowns appear not simply advisable but mathematically inevitable. The UK government’s strategic choices in March 2020 must be understood against this backdrop.
The problem, of course, is that if the assumptions in Report 9 were flawed or incomplete, then the downstream policy conclusions were unsound. And if the Inquiry now evaluates those decisions using essentially the same model class, with the same structural assumptions, it will inevitably conclude that the only failure was delay.
3. How the Inquiry Reproduces the Modeling Errors
3.1 Counterfactuals that restate the model, not reality
One of the Inquiry’s most quoted findings is that delaying the first national lockdown by one week led to approximately 23,000 additional deaths (UK Covid-19 Inquiry 2024). That estimate is presented as if it were a historical fact. It is not. It is a model output. When earlier restrictions are fed into an Imperial-style model, the structure of the model guarantees that peak infections and deaths will be lower. That is what the equations are built to show.
The Inquiry’s counterfactuals, in other words, restate the model rather than challenge it. They do not ask whether the underlying assumptions about behavior, risk distribution, or health-system response were correct. They simply accept them and vary the timing. This is circular reasoning. It lends an aura of quantitative authority to what is, at root, a reassertion of the original strategic paradigm.
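The circularity can be demonstrated directly. The following minimal sketch, using illustrative parameters that are assumptions of this example rather than Report 9's calibration, shows that in any compartmental model of this class, moving a fixed transmission-reducing intervention earlier mechanically lowers the cumulative modeled death toll. The "finding" that earlier is better is built into the equations before any data are consulted.

```python
# Minimal deterministic SEIR simulation (Euler integration).
# All parameters are illustrative assumptions, NOT Report 9's values.

def run_seir(intervention_day, days=300, dt=0.1):
    N = 67e6                   # UK-scale population (assumed)
    beta0, beta1 = 0.6, 0.1    # transmission rate before/after intervention (assumed)
    sigma, gamma = 1 / 5, 1 / 7  # 5-day latent period, 7-day infectious period (assumed)
    ifr = 0.009                # assumed infection fatality rate
    S, E, I, R = N - 100.0, 0.0, 100.0, 0.0
    t = 0.0
    while t < days:
        beta = beta1 if t >= intervention_day else beta0
        new_exposed = beta * S * I / N * dt
        new_infectious = sigma * E * dt
        new_recovered = gamma * I * dt
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        t += dt
    return ifr * R  # cumulative modeled deaths at the horizon

late = run_seir(intervention_day=30)
early = run_seir(intervention_day=23)  # identical policy, one week earlier
assert early < late  # guaranteed by the model's structure, not by evidence
```

Because pre-intervention growth is exponential and the intervention cuts transmission by the same fixed amount in both runs, the earlier run always ends with fewer modeled deaths. Varying the timing inside the model can therefore never test whether the model's behavioral and mixing assumptions were right in the first place.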
3.2 Mis-specified behavior
The Imperial framework and, by extension, the Inquiry’s own analysis largely treat behavior as an exogenous function of government orders. The implicit assumption is that without legal restrictions, people would have continued to socialize, travel, and work in patterns similar to pre-pandemic norms.
Real-world data from the UK and elsewhere tell a different story. Studies of mobility and contact patterns show that individuals and firms began to change behavior in response to rising case counts and media reporting before formal lockdowns were imposed (Jarvis et al. 2020; Flaxman et al. 2020). Older and high-risk individuals often adjusted first. Businesses shifted to remote work where they could. Voluntary cancellations of events and travel were widespread.
If behavior is responsive to risk perception, then a model that assumes near-normal behavior in the absence of legal mandates will overstate the counterfactual death toll without lockdowns. It will also understate the ability of targeted communication and voluntary measures to reduce transmission. Yet the Inquiry’s modeling has not integrated this evidence in a serious way. Behavior is still cast as something done to the public by the state, rather than something dynamically negotiated by individuals.
3.3 The missing cost–benefit analysis
The Inquiry’s terms of reference explicitly required it to consider the “relative benefits and disbenefits” of non-pharmaceutical interventions. To date, however, it has not produced a systematic cost–benefit analysis of lockdowns, school closures, and related restrictions.
The missing components are substantial. Educational economists have documented large and persistent learning losses associated with school closures, with probable long-term impacts on earnings, health, and inequality (Machin and Vignoles 2021). Health researchers have highlighted increases in late-stage cancer presentations, disruptions to cardiovascular care, and growing waiting lists in multiple specialties (Miles, Stedman and Heald 2020). Mental-health indicators deteriorated sharply, particularly among the young.
Without integrating these harms into a common analytical framework, any model that minimizes only short-term Covid deaths will systematically favor more restrictive policies. The Inquiry has thus left the hardest question unanswered: not “did lockdowns reduce Covid transmission in the short term?”—which is plausible—but “were lockdowns, school closures, and prolonged restrictions justified once all costs and benefits are weighed?”
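The asymmetry can be stated formally. As a sketch, with notation introduced here purely for illustration: let p denote a policy, D(p) modeled short-run Covid deaths, and H_1(p), …, H_k(p) the collateral harms (learning loss, delayed diagnoses, mental-health burden), each expressed in some common unit. The implicit objective of the Report 9 framework, versus a full welfare accounting, is then

```latex
\min_{p}\; D(p)\ \ \text{s.t.}\ \ \mathrm{ICU}(p) \le C
\qquad \text{versus} \qquad
\min_{p}\; D(p) + \sum_{i=1}^{k} \lambda_i H_i(p), \quad \lambda_i > 0.
```

Since the harms H_i generally increase with policy stringency, setting every λ_i to zero, which is what omitting them amounts to, biases the optimum toward the most restrictive feasible policy. No ranking of lockdowns against lighter-touch alternatives is meaningful until those terms are on the table.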
4. Sweden: The Counterexample the Inquiry Refuses to Center
Sweden provides the most important empirical test of the lockdown paradigm. Swedish authorities did not pursue a policy of “doing nothing.” They issued clear guidance, limited certain gatherings, and adjusted care-home policies after early failures. But they did not impose stay-at-home orders, did not close primary schools for long periods, and did not shut down large swaths of economic and social life (Carlsson et al. 2023).
Over the full period of the pandemic, Sweden’s age-adjusted excess mortality was among the lowest in Europe, and considerably lower than that of the United Kingdom (Eurostat 2023). It also avoided the most severe educational disruptions and preserved a much higher degree of civil-liberty normality. Sweden’s experience therefore falsifies key elements of the Imperial model: that lockdowns are essential to prevent catastrophic mortality; that voluntary behavior change is too weak to matter; and that targeted protection cannot work.
Bhattacharya’s November 2025 tweet underscores this point in provocative form: “Fact check: not locking down at all (like Sweden) would have saved lives in the UK. Hard to believe how much money the UK spent on its sham covid inquiry” (Bhattacharya 2025). The claim invites debate about the precise magnitude of any “lives saved” effect, but it correctly identifies Sweden as a crucial counterexample. It also highlights the Inquiry’s failure to take such evidence seriously.
A truth-seeking Inquiry would have treated Sweden as a central comparator and used it to test the plausibility of UK modeling assumptions. A self-protective Inquiry relegates Sweden to the margins.
5. Institutional Self-Protection: Why They Don’t Want to Learn
The persistence of the “lockdown too late” narrative is easier to understand once institutional incentives are considered. A genuine audit of the pandemic response would not merely point to poor communications or chaotic decision-making inside Downing Street. It would reveal systemic misjudgments that cut across government, the civil service, and the scientific establishment.
First, it would show that the modeling work presented to ministers dramatically overstated the certainty of its projections and understated the role of voluntary behavior. It would show that alternatives, such as age-stratified protection and Sweden-like strategies, were dismissed too quickly, often with rhetorical claims about “letting the virus rip” rather than serious analysis (Ioannidis 2022).
Second, it would reveal the scale and nature of collateral harms. It would shift public understanding of many non-Covid deaths and long-term morbidities from being “caused by the pandemic” to being “caused by the response to the pandemic.” That reframing would have profound implications for evaluations of political and scientific leadership.
Third, it would highlight failures of transparency and humility. Policymakers and advisers often presented their choices as dictated by “the science,” when in reality they were based on particular models, value judgments, and risk preferences. Admitting this ex post would erode trust in both government and the scientific advisers on whom it relied.
The reputational stakes are obvious. If the Inquiry concluded that lockdowns were, on balance, a grave error, the fallout would be immense. It would implicate senior politicians across parties, key scientific figures, and the broader public-health apparatus. It might also fuel demands for accountability that go beyond the comfortable rituals of Westminster politics.
From this perspective, the Inquiry’s reluctance to question the strategic paradigm is not surprising. By keeping the focus on timing errors, personality clashes, and administrative shortcomings, it satisfies a public desire for some blame to be allocated while preserving the legitimacy of the core approach. The story becomes one of a fundamentally sound strategy undermined by flawed execution. That is a far more comfortable conclusion for all involved.
6. What a Truth-Seeking Inquiry Would Have Done
A genuinely independent and learning-oriented inquiry would have followed a different path.
First, it would have placed the modeling itself under scrutiny. It would have compared Imperial projections with observed outcomes across multiple countries and regions, including those that did not lock down. It would have commissioned competing modeling groups, including skeptics of lockdowns, and required them to expose their assumptions and uncertainties.
Second, it would have constructed a formal cost–benefit framework. This would have required integrating epidemiological, educational, economic, and mental-health data into a single structure capable of weighing competing harms and benefits. Even imperfect attempts would be more informative than ignoring non-Covid harms altogether.
Third, it would have treated Sweden and other non-lockdown jurisdictions as central comparators, not as curiosities. Their trajectories would be used to test hypotheses about the necessity and proportionality of various interventions.
Fourth, it would have revisited the ethical foundations of population-wide coercive measures. It would have asked whether it was justified to suspend core civil liberties and close schools when morbidity and mortality risk was heavily concentrated in older age groups and those with significant comorbidities.
Fifth, it would have taken seriously critiques from dissenting scientists, including those associated with the Great Barrington Declaration, as well as economists and legal scholars who questioned both the practicality and legality of prolonged emergency measures. Voices like Bhattacharya’s would have been integrated into the analysis, not treated as external commentary (Bhattacharya 2025; Ioannidis 2022).
The UK Covid-19 Inquiry has done none of these things in any systematic sense.
7. Conclusion
The UK Covid-19 Inquiry was an opportunity to learn from a profound national trauma. Instead, it has largely reaffirmed the strategic choices that produced that trauma. By reusing the same modeling assumptions, downplaying the importance of voluntary behavior, neglecting systematic cost–benefit analysis, and sidestepping the Swedish counterexample, the Inquiry has shown that it has “learned nothing.”
The deeper problem is that learning would be dangerous for the institutions involved. A full reckoning would show that the UK’s pandemic strategy was not simply executed late or chaotically, but may have been conceptually wrong from the outset. It would reveal that massive social, educational, and health harms were not an unavoidable consequence of the virus, but the result of deliberate policy. It would vindicate critics who were marginalized at the time.
Bhattacharya’s tweet on 20 November 2025 points in the right direction: “Fact check: not locking down at all (like Sweden) would have saved lives in the UK. Hard to believe how much money the UK spent on its sham covid inquiry” (Bhattacharya 2025). His sentiment captures the Inquiry’s greatest failing. It has not been designed to discover whether the lockdown paradigm itself was a mistake. It has been designed to tidy up the story around that paradigm.
Until the UK is willing to confront the possibility that its central Covid strategy was wrong, rather than merely late or messy, it will remain vulnerable to repeating the same errors in the next crisis. The public deserves an inquiry that tells the whole truth. It has not received one.
References
Bhattacharya, J. (2025) Tweet, 20 November. X (formerly Twitter).
Carlsson, M., Pettersson, M. and Karlsson, T. (2023) Excess Mortality in Sweden During the Covid-19 Pandemic: Age-Adjusted Trends and International Comparisons. Stockholm: Swedish Institute for Social Policy.
Eurostat (2023) Weekly Death Statistics. Luxembourg: European Commission.
Ferguson, N.M., Laydon, D., Nedjati-Gilani, G. et al. (2020) Impact of Non-Pharmaceutical Interventions (NPIs) to Reduce COVID-19 Mortality and Healthcare Demand. London: Imperial College Covid-19 Response Team.
Flaxman, S., Mishra, S., Gandy, A. et al. (2020) ‘Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe’, Nature, 584(7820), pp. 257–261.
Ioannidis, J.P.A. (2022) ‘Forecasting for COVID-19 has failed’, International Journal of Forecasting, 38(2), pp. 423–438.
Jarvis, C., Van Zandvoort, K., Gimma, A. et al. (2020) ‘The impact of local and national restrictions in the United Kingdom on social contacts and mixing patterns’, BMC Medicine, 18(1), p. 287.
Machin, S. and Vignoles, A. (2021) Education Loss and the Covid-19 Pandemic: Evidence and Policy Implications. London: CEP Discussion Paper 1811.
Miles, D., Stedman, M. and Heald, A. (2020) ‘Living with COVID-19: balancing costs against benefits in the face of the virus’, National Institute Economic Review, 253(1), pp. R60–R76.
UK Covid-19 Inquiry (2024) Module 2 Report: Core UK Decision-Making and Political Governance. London: HMSO.