Post-truth is often viewed as a threat to public affairs such as vaccination policy, climate change denialism, or the erosion of public discourse. Yet combating post-truth is rarely viewed as a priority for policymakers, and the preferred ways of combating it usually take the form of localised epistemic interventions such as fact-checking websites or information campaigns.

The impression that post-truth is not a policy matter, but a purely epistemic one, is strengthened by the standard view that post-truth is indifference to facts. On a popular understanding, post-truth is characterised primarily by lack of concern for the truth. Post-truth believers are people who ‘have decided that telling the truth simply does not matter’ (Ball 2017: 303), and who hold their false beliefs ‘whether there is good evidence for it or not’ (McIntyre 2018: 12). On the standard model, beliefs are detached ‘from verifiable facts’ and the basis for assessing truth is replaced by ‘criteria other than verifiability’, such as personal preference (Kalpokas 2019: 5). Thus, on the standard view, post-truth is an epistemic failure to follow facts where they lead. If one adheres to this understanding, there seems to be limited scope for the wider social interventions policy-makers usually concern themselves with.

Yet as I argue in a recent paper, research into the motivations of non-vaccinating parents – a particular stripe of post-truth believers – shows they are far from indifferent to facts. Indeed, they put more (not less) effort into verification than people who accept that vaccines are safe on the word of scientists, medical practitioners, or academic publications, and sometimes fund research campaigns into vaccine safety. Similarly, some flat-earthers fund expeditions to view the curvature of the earth, attend conventions, design models of the solar system, and encourage others to ‘do their own research’. The problem, I argue, is not indifference to facts, but misplaced trust in the epistemic sources considered authoritative when gathering evidence. Pariahs of the medical community such as Andrew Wakefield, or bizarre anonymous accounts on QAnon, are viewed as trustworthy, whereas mainstream guarantors of truth are viewed as tainted by economic incentives (such as big pharma) and hence unreliable.

Focusing on the epistemic authorities that post-truth believers trust reveals that the problem is not a cessation of evidence-seeking, but a pathology of evidence-seeking. Communication channels from mainstream epistemic authorities are closed off, and replaced by information coming from direct experience, biased communities, or disreputable ‘experts’. Verification efforts do not stop, but are misdirected towards pursuits that preclude rather than further veracity. The tendency of post-truth believers to judge the personal incentives of epistemic authorities opens volatile avenues in which anyone can selectively or falsely report information about the interests or rationales of mainstream epistemic authorities, and this arbitrary information ends up determining which facts are believed. Post-truth believers are not closed to facts themselves, but to the mainstream systems of testimony through which evidence flows. On this novel characterisation of post-truth, the underlying problem concerns broader social structures that go beyond limited epistemic matters. Distrust is shaped by configurations of fears, suspicions, and discontents, as well as by channels and polarised networks that are (mis)used for seeking confirmation. As I put it in the paper, the existence of these wider factors that shape (dis)trust shows remedies to post-truth must target not (just) the misguided acceptance of facts, but undertake ‘the more daunting [task] of remedying networks of weaponised distrust’.

The broader social determinants of post-truth offer scope for public policy remedies. Understanding post-truth as misplaced trust in disreputable sources broadens the scope of intervention beyond localised epistemic measures such as the simple dissemination of facts. Remedies should engage with the myriad forms of communication, access, transparency, and stakeholder engagement (to name but a few) that shape public perceptions of the trustworthiness of mainstream epistemic authorities. For example, when it comes to combating the post-truth beliefs behind vaccine hesitancy, improving vaccine uptake requires not (just) disseminating facts about vaccine safety, but (also) a more comprehensive approach to healthcare policy: better science communication and access to information, taking demands for individualised assistance seriously, designing transparent consumer protection legislation, and improving regulations to ensure that the financial incentives of, e.g., pharmaceutical companies do not impact research on vaccine side-effects. These areas of public policy are crucial for tackling post-truth understood as misplaced distrust.

It could be objected that such interventions do not guarantee success, since post-truth believers might persist in their beliefs even when sufficient reassurance is provided. In response, it must be conceded that policy interventions cannot influence unreasonable standards for being reassured, which means the solution is not a sure bet. However, neither is disseminating facts; indeed, throwing facts at people who hold a different opinion has been found to deepen disagreement (Kahan et al. 2012), which means confronting post-truth believers with contrary facts might entrench them in their false beliefs. Moreover, engaging with the social determinants of mistrust usually takes the form of making services more reliable and transparent and improving checks and balances, which are valuable in themselves. These are aims policy-makers strive for anyway; the added benefit of the present discussion is to argue that they are valuable for combating post-truth too.

Moreover, focusing on the determinants of distrust can reveal new areas for policymakers to focus on. A particular area where policy interventions could significantly impact post-truth beliefs is AI governance. In the realm of AI and its relationship with facts, a prominent concern is the alarming pace at which AI can produce plausible falsehoods. From the creation of convincing deepfake images to the fabrication of fake quotes and references, AI has demonstrated an unnerving ability to blur the lines between reality and deception. Locating the origins of post-truth in misplaced trust uncovers a new danger of AI tools, namely that AI might also generate falsehoods about the guarantors of truth. Instances where ChatGPT invents sources and quotations for some of the false claims it generates demonstrate that AI algorithms can ‘hallucinate’ statements about expert bodies, peer-reviewed publications, or other reputable epistemic sources, whose weight can then be falsely invoked to back up AI-generated falsehoods. This is significant because while few of us would believe a statement such as ‘water is actually a solid substance’ when its source is known to be an AI algorithm, individuals might approach AI-generated falsehoods with less scepticism if they are framed as originating from expert bodies or other epistemic authorities trusted by the public. For instance, if AI generates a statement like ‘CERN scientists discover that water is actually a solid’ or ‘there is a changing consensus among X organizations about water actually being a solid substance,’ it can more easily mislead individuals into accepting that water is in fact a solid.

This problem of painting mainstream epistemic authorities as the source of falsehoods uncovers a need to monitor and regulate how socially recognised guarantors of truth are presented in AI-generated content. Policy-makers can intervene in this area through copyright regulation, liability legislation, or by making disclaimers compulsory when the names of expert bodies are mentioned in AI-generated content. Hence, whereas the discussion of policy-making and AI has mostly focused on how AI might improve policy-making, striking a balance between AI innovation and the preservation of truth and of trust in the gatekeepers of truth is an area policy-makers should be concerned about.

In conclusion, combating post-truth requires going beyond the simple dissemination of facts and/or fact-checking, and includes understanding and reacting to the broader social determinants of distrust in mainstream guarantors of truth. Improving communication, access, transparency, and stakeholder engagement, as well as reacting to emerging threats through e.g. AI governance, are ways in which policymakers can help build trust in mainstream epistemic authorities.


Ball J (2017) Post-Truth: How Bullshit Conquered the World. London: Biteback Publishing.

Kahan DM, Peters E, Wittlin M, Slovic P, Ouellette LL, Braman D, Mandel G (2012) The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2(10): 732–735.

Kalpokas I (2019) A Political Theory of Post-Truth. London: Palgrave.

McIntyre L (2018) Post-Truth. Cambridge, MA: MIT Press.

Popescu-Sarry D (2023) Post-Truth is Misplaced Distrust in Testimony, Not Indifference to Facts: Implications for Deliberative Remedies. Political Studies, 0(0).

Diana Popescu

Diana is an Assistant Professor in Political Theory in the School of Politics and International Relations. She joined the University of Nottingham in 2023, having previously worked at the University of Edinburgh, King’s College London, the University of Oxford, and the London School of Economics. Diana received her PhD in Government from the London School of Economics in 2018, and also holds a Post-Graduate Certificate in Higher Education from the London School of Economics. She co-edits the Beyond the Ivory Tower series for the Justice Everywhere blog, which publishes interviews with political thinkers who have made an impact on public matters through their work.