Facts won't win the conspiracy war

Where did all the facts go?

The idea that we can combat conspiracy theorists with facts is an illusion. We view the audiences of conspiracy influencers like Alex Jones as naïve and in need of educating: if only they could see the facts, they’d see the error of their ways. But the world of conspiracy, far from abandoning facts and reason, has claimed them stylistically as its own. Challenging the facts is not enough. We need a whole new mythology surrounding truth and truth-seeking, writes Sun-ha Hong.


After every storm comes a limbo. The panic around fake news and the end of facts has passed from a ubiquitous din to a familiar, if discomforting, elephant squatting in the room, occasionally reminding us: how exactly does mis/disinformation work? What makes so many of us so susceptible to it? Does all the fact-checking even help? While there are many different forms of information pollution – the contamination of our media environment with low-quality or misleading information – the nature of this pollution is not a simple divide between good information and bad, and it is not effectively solved by stuffing people with more and more media literacy. Historians have reminded us that forms of ‘fake news’ have always been a part of large-scale media systems, and that panics around mis/disinformation date back to Antiquity: the very practice of history, after all, is often traced back to Thucydides, who sought to establish consistent standards for verifying information in his record of the Peloponnesian War.

Such observations can lead us to a more realistic model of how misleading information becomes appealing. Too often, we are caught in ‘fact nostalgia’ – a convenient fantasy that prior to our current woes, facts were facts and society knew to listen to the most qualified experts. Instead, we must begin with the recognition that, by and large, people do not operate as dispassionate information-processing robots, swallowing the messages they consider most rational and spitting out the ones they consider unreliable. We often imbibe media in habitual and ambient ways, and we become attached to factual claims or ideas for the emotions and social relationships they offer as much as for their informational quality. We think of facts as epistemic objects, but they can also be deployed as aesthetic ones.


Consider the case of what Rebecca Lewis has called the ‘alternative influencer network’ (AIN): a loose umbrella of largely Youtube-based, US-centric influencers who present themselves as disrupting established experts with a fearless, transgressive pursuit of rational analysis and objective truth. Such figures’ success at personal branding, partly through their active leveraging of culture war controversies, has made them a major amplifier of misinformation and conspiracy theory around issues like the 2020 US elections and the COVID-19 pandemic.

Yet when people become avid, regular consumers of such content, they often do so alongside significant ambivalence about just how trustworthy that content is. Regulars of the infamous conspiracy peddler Alex Jones, for instance, fall back on the narrative that while influencers like Jones ‘may not get everything right’, they will investigate things other outlets are too afraid to touch. Indeed, many such influencers lean into this dynamic. A common refrain is that they are ‘just asking questions’: that by introducing a climate-skeptic theory to the audience or inviting an anti-vaxxer onto their show, they are simply providing a neutral and rational forum for free-thinking viewers to make up their own minds.

What emerges from this picture is an osmosis model of information ingestion and pollution. We do not typically operate border controls when consuming information, reviewing each message for its factuality; instead, we are often porous, habitual subjects, gradually leaning into different political and cultural dispositions through ambient and ambivalent practices of consumption.

These patterns further emphasise the question of what people get out of polluted information – or, in other words, what people might find satisfying or useful about content that we might otherwise label as misleading or inciting hate. Stories that are factually bankrupt but go wildly viral tend to do so by tapping into ‘deep memetic frames’: underlying cultural templates, such as those around government overreach and Wild West individualism, to which we often develop long-term attachments and which shape our underlying attitudes to new information. A deep-seated sense that governments work invisibly to surveil and limit personal freedoms, for instance, helps till the soil for the suggestion that a school shooting was a ‘false flag’ operation.

When new events and crises occur, we engage in often shared and collective processes of sensemaking, trying to align what we think we are seeing with our underlying frames about how the world works. Rumours, sensationalism, and misinformation become particularly attractive when the stories they offer become useful for this sensemaking activity. The key question, then, is not so much what makes people believe in ‘bad’ information, but what that information might allow people to feel – and what kind of position it allows them to take up in other narratives and debates.

In this sense, it is revealing that many well-known amplifiers of high-profile misinformation and conspiracy theory embrace the language of facts, truth, and Reason. Staying with the US context, consider the Alex Jones fan’s rationalisation from earlier: that Jones may not always be right, but that “being unafraid to question things out loud … has a long history in this country.” This positioning offers a way to embrace an identity of the heroic free thinker and a true American. The conservative pundit Ben Shapiro, who advertises himself as a fast-talking, super-smart Harvard graduate, brands himself with the slogan: “Facts don’t care about your feelings”. This pleasure of militant, self-righteous factuality culminates in a popular slogan and meme for the 2016 Trump campaign: “Fuck Your Feelings”.


In my work, I call this fact signalling – a practice in which the stylistic tropes of logical thinking, scientific research, or data analysis are worn like a costume to bolster a sense of moral righteousness and certitude. False accusations of electronic voting fraud in the 2020 elections thus featured incoherent, pseudo-technical diagrams and explanations about data packets and cybersecurity, while Alex Jones is well known for brandishing reams of paper as ‘research’ for his claims – papers which, often enough, turn out to be simply printed copies of his own articles. In the US, political disinformation often involves ‘evidence collages’ – image files packed with objects that look and feel like evidence, such as documents stamped ‘confidential’ or blurry, redacted emails.

Such techniques are not necessarily about producing a watertight pretence of reliable information, but about enabling consumers to enjoy a sense of intellectual superiority, moral certitude, or even solidarity. Researchers like media theorist Wendy Chun and media historian Fred Turner have shown how postwar America has been dominated by a ‘transgressive hypothesis’ – in which organised knowledge production, such as mainstream media or university science, is repeatedly suspected of being corrupt and un-American, requiring individuals to break the rules to find authentic truth. Crucially, this logic allows any transgressive factmaking to feel authentic, and even heroic, solely by virtue of its transgressiveness. The more outlandish or criticised, the better. This broader sentimentality is what makes fact signalling particularly powerful in the US – though it is found in different shades and stripes elsewhere – regardless of how much research or intellectual honesty is actually present.


The role of social media platforms and ranking algorithms in making such transgressiveness profitable has often been overstated; it remains unclear if, when, and to what extent echo chambers or filter bubbles form to lead users down rabbit holes. What is clearer is that technologies like social media platforms and generative AI accelerate and obfuscate the circulation of unverified information, undermining the idea that a rational, educated individual might be able to ‘do their own research’ and sort the wheat from the chaff. Rather than more fact-checking or media literacy campaigns that ask individuals to bear the burden, the path forward must involve a more fundamental shift – away from sorting, visibilising, and rewarding information on the basis of ‘virality’.

The discourse of misinformation operates within a mythology of the righteous pursuit of knowledge, locking transgressive truth-seekers and purveyors of fact signalling in a feedback loop. Rather than looking to The Algorithm as the engine of a novel mis/disinformation crisis, we need to ask a broader and more difficult question about common frames for how we make sense of the world – of which the transgressive truth-seeker is but one example – and how these frames make us as a whole vulnerable to certain kinds of stories.
