Changing How the World Thinks

An online magazine of big ideas


The Tyranny of Evidence

From climate change to cancer scares, we’re in love with ‘the evidence’. Is it time to end the affair?
Rupert Read

A central aspect of my philosophical work these days is this: to warn against over-estimating, for example, how much one can learn from past financial crises, in thinking about future financial crises. How much, to put it in more general – and philosophical – terms, one can learn inductively. There is plenty one can learn; but there is also a severe limit on what one can learn. There is a limit, in other words, on the value of evidence.

The danger of not being continually aware of this point is that one may think, at least unconsciously, that there are specific lessons to learn and that, once one has learnt them, then one's job is done and one has genuinely ensured as best one can that there will not be further such crises in the future.

This would be a hubristic stance. Hubris, in the long run, inevitably leads to nemesis.

For we are always going to be living in a social world that defies full comprehension and control. A world that we do not and never will fully understand, as my colleague Nassim N. Taleb puts it in his 2012 book Antifragile.

The real challenge, the deep thing that one has to learn, is how best to seek safety for the economy, for citizens, in such a world: in a world that one accepts as a world one cannot predict and control.

That we live in such a world is revealed by financial crises. In fact, that might itself be justly said to be the deepest thing one can learn from them. 

This is the challenge we face: to learn to live more safely in a world that we are never going to be able to understand or control or even ‘manage’. This entails a ‘letting-go’.

But the alternative is worse: that, by seeking to manage, to master, our world, we give ourselves a false assurance that all is going to be well, and make it more likely that we will ‘blow up’.

Now: it is of course an excellent thing to seek to learn from history – from the history, for example, of past financial crises. Hyman Minsky and John Maynard Keynes are among the maestros of having done so.

But there is danger in such learning, too. One such danger is what Taleb calls 'the narrative fallacy': falling into the trap of seeing in history an anticipation of all future possibilities, rather than (as one ought to) seeing in history only a tiny sub-section of what could have happened, let alone what might happen in the future.

When seeking to minimise the chances of financial crises in the future, one ought to focus most of one's attention on what can be done to build down the risk of 'black swans' (Taleb, 2007): rare, devastating, inherently unpredictable events. By definition, such events are vanishingly rare in the historical record, and those that do appear there are only a very poor sample of the possible such events that there could be.
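The point that the historical record is a poor sample can be illustrated with a toy simulation (my own sketch, not from the article; the numbers are invented for illustration). Draw many simulated 'histories' from a heavy-tailed distribution: because the true average is driven by rare, enormous events, most finite histories look tamer than the process that generated them.

```python
import random

random.seed(0)

# A Pareto distribution with tail index alpha = 1.2 has a finite mean,
# but that mean is dominated by rare, enormous draws ("black swans").
alpha = 1.2
true_mean = alpha / (alpha - 1)  # = 6 for alpha = 1.2

def history(n):
    """One simulated 'historical record' of n observations."""
    return [random.paretovariate(alpha) for _ in range(n)]

histories = [history(250) for _ in range(1000)]
sample_means = [sum(h) / len(h) for h in histories]

# Most histories contain no sufficiently extreme event, so most sample
# means understate the true mean: the past looks tamer than the process.
share_understating = sum(m < true_mean for m in sample_means) / len(sample_means)
print(f"true mean: {true_mean:.1f}")
print(f"share of histories whose average understates it: {share_understating:.0%}")
```

Inductive inference from any one of these histories would, more often than not, underestimate the risk; that is the narrative fallacy in miniature.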

The philosophical work that I am undertaking at present jointly with Taleb is devoted to exploring these and related thoughts in relation to financial crises, and to other such black swans (e.g. in the environmental sphere). In particular, Taleb and I are formulating a version of the Precautionary Principle not vulnerable to the kinds of objections usually made against it (by Cass Sunstein – co-author of Nudge – and others).

The Precautionary Principle (PP) states, basically, that where the stakes are high, a lack of full knowledge or of reliable models – a lack of certainty – should not be a barrier to legitimate precautionary action. We shouldn't, in other words, need certainty in order to justify protective action.
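As a crude formalisation (my own sketch; the function names and figures are invented for illustration, not drawn from Read and Taleb's paper), the contrast between orthodox cost-benefit reasoning and a precautionary rule might look like this:

```python
def expected_value(outcomes):
    """Orthodox cost-benefit analysis: probability-weighted sum of payoffs."""
    return sum(p * v for p, v in outcomes)

def precautionary_veto(outcomes, ruin=-100000):
    """A crude precautionary rule: veto any action carrying a non-zero
    estimated chance of ruin-level harm, however small that estimate."""
    return any(p > 0 and v <= ruin for p, v in outcomes)

# A policy with a healthy expected gain but a tiny estimated chance of ruin.
policy = [(0.9999, 100), (0.0001, -100000)]

print("expected value:", expected_value(policy))          # positive: cost-benefit says go
print("precautionary veto:", precautionary_veto(policy))  # True: the PP says stop
```

The PP's point is precisely that the 0.0001 in such a calculation is itself a guess for which we have no reliable evidence, so where the downside is ruinous the calculation should not be trusted to license the action.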

Invoking precaution is thus an alternative to or a complement to invoking evidence. Our contemporary politics, economics, risk-management, medicine and science are all fixated on evidence and on being 'evidence-based’. My argument is that this is dangerous. One can’t have ‘evidence’ of things that haven’t happened yet, nor to any meaningful degree of things that are very rare, nor to any meaningful degree of things dependent upon human decision.

Why can’t you know the social world and manage it?

There are two main reasons:

(1) The social world is made up of – constituted by – understandings (Winch, 1958, 1990; Read, 2008). Of interpretations. It defies scientisation. It is an illusion to think that it can be known as the physical world can be known, 'from the outside'. It can only be truly known 'participatorily'.

(2) Even if, per impossibile, the social/economic world could be known scientifically, it still could not be controlled/managed. Because it is a moving target (Read, 2012). Because human beings respond to attempts to know them: by seeking to make them true, or by seeking to make them false, or in other ways. There are many examples of this. A famous and salient one is Goodhart's Law. A simpler version of the point is encapsulated in a marvellous remark made by Louis Armstrong (or sometimes attributed to Humphrey Lyttelton) on the future of jazz: “If I knew where jazz was going, I’d be there already…”
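Goodhart's Law – when a measure becomes a target, it ceases to be a good measure – can itself be sketched as a toy simulation (my own illustration, not from the article): once agents start gaming a score, the score stops tracking the quality it was meant to measure.

```python
import random

random.seed(1)

def correlation(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 500
quality = [random.gauss(0, 1) for _ in range(n)]

# Before the score is a target, it is a noisy but honest measure of quality.
honest_score = [q + random.gauss(0, 0.3) for q in quality]

# Once the score becomes a target, agents add gaming effort that is
# unrelated to true quality, and the measure degrades.
gamed_score = [q + random.gauss(0, 0.3) + random.uniform(0, 10) for q in quality]

print("correlation before targeting:", round(correlation(quality, honest_score), 2))
print("correlation after targeting: ", round(correlation(quality, gamed_score), 2))
```

The knower's act of fixing on a measure changes the behaviour being measured – which is exactly why the social world is a moving target.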

There are limits, unsurpassable limits, on the knowability of our future.

We might think that as we come to know more, these limits will recede. But this is not true. The PP is increasingly relevant, due to man-made dependencies that propagate impacts of policies across the globe: this applies strongly to globalised economic and financial systems and to globalised ecosystems (e.g. the climate system). In contrast, absent humanity, the biosphere engages in natural experiments due to random variations with only local impacts.

Now, the PP is essential for a limited set of contexts and can be used to justify only a limited set of actions. GMOs are one good example (see my recent ‘evidence’ to Parliament on this): they represent a public risk of global harm. The PP should be used to prescribe severe limits on GMOs. Likewise, the PP should be used to proscribe various forms of financial behaviour that have the potential to unleash black swans.

In conclusion:

i) The social world is necessarily partly opaque to social/‘scientific’ knowledge, precisely because it is constituted by human beings, who are intrinsically understanders, intrinsically responsive to efforts to know them, etc.

ii) We need to be less fixated on the evidence, where the human world is concerned, and more determined to take up a precautionary stance. The stakes are high. It would be wrong to gamble, in such a situation. And being ‘evidence-based’, I have shown, is, ironically, being just such a foolish and unethical gambler.

In sum, what’s more reliable than evidence? Precaution.

 

An earlier version of this article first appeared in The Philosopher’s Magazine online.

Image credit: Bill Selak

Join the conversation

David Morey 2 on 21/08/2015 9:42pm

Good stuff. Control, reducing potential outcomes, and doing controlled experiments give us good science, but natural reality is very different, being unmeasurably complex, open, and involving meaning, interpretation, emergent phenomena, creative responses, emotions, desires and one-off contingencies; so it is a good idea to recognise the limits and the context in which we can try to apply whatever knowledge we are able to muster. Every event starts off being new, unique and surprising, until it does the same again and forms a pattern.
