I sometimes play a game with my cat, Cleo. I stand around the corner from her, just out of sight. She starts to sneak towards me. When I poke my head around the corner to look at her, she freezes. When I pull back, she carries on sneaking. Eventually, she pounces on my ankles.
In my mind, we’re playing Statues – the children’s game in which the aim is to sneak up on someone without that person seeing you move. But Cleo can’t be thinking of it that way. So, what’s she thinking? Why does she freeze when my head pops around the corner?
Here’s an obvious answer. She’s trying to attack my ankles. She stops when I stick my head out because she knows I can see her, and knows that if I see her coming I’ll know she’s coming, and she’ll lose the element of surprise.
This obvious answer assumes that Cleo is a mindreader. Now, I don’t mean by this that she’s telepathic. Psychologists and philosophers use the term ‘mindreader’ to refer to someone with the ability to ascribe mental states to others. I’m a mindreader in this sense, and so are you – because we can make judgments about what others are thinking, feeling, seeing, and so on. Our answer assumes that Cleo is a mindreader because it involves her making a judgment about what I can see (her approaching), and about what I know (an attack is imminent).
But can Cleo really ‘read minds’? Does she even have a concept of seeing or knowing? To be clear, my question is not whether Cleo sees and knows things, but whether she knows that I see and know things – or that any animal does. When she hunts, does she consider whether her prey knows she’s there? When she faces off with the neighbour’s cat, does she realise that it sees her?
This is a surprisingly difficult question to answer – not just about Cleo, but about any non-human animal.
Why? Surely to answer it we’d just need a test. We’d need a situation in which a mindreader would make a particular judgment (that I can see, say) and would consequently behave in a certain way. Then we’d change the situation slightly so that a mindreader would now make a different judgment (that I can’t see) and so would behave differently. If Cleo behaved as a mindreader should in both situations, that would be evidence that she was a mindreader.
But wait: haven’t I just described such a test – and didn’t Cleo pass? In our game of Statues, Cleo behaves as a mindreader should: she stops when I see her, and moves when I don’t. So, isn’t this evidence that she’s a mindreader?
It’s not quite that simple. It’s true that Cleo behaves like a mindreader here. The question is, though, whether she behaves in this way because she has made a judgment about my mental state. Might her behaviour actually be a response to something else?
Matters are complicated by the fact that when my mental state changes in Statues, something also looks different from Cleo’s perspective. When I can see Cleo, she can see me; when I can’t, she can’t. Perhaps this observable difference, rather than whether or not I see her, explains the change in Cleo’s behaviour. Perhaps she uses a strategy like ‘freeze if you see the face’, which simply exploits the observable cues. If that’s right, Cleo doesn’t make any judgment about what I can see – she just responds to the appearance of my face. So, this ‘Statues’ test is ineffective: it fails to discriminate mindreading from what’s known as ‘behaviour-reading’ – using behavioural strategies based on observable cues.
An effective test would be one which told us whether Cleo’s behaviour varied in response to my mental state, and not just in response to an observable cue. Ideally, the test would involve manipulating my mental state and looking for any change in Cleo’s behaviour, whilst keeping everything else the same. If something else changed alongside my mental state, this would undermine the test – because there’d be something else which might explain any change in Cleo’s behaviour.
The problem is that it’s nearly impossible to manipulate my mental state without changing anything else. When I manipulate whether I see Cleo by sticking my head around the corner, I also manipulate whether Cleo can see my face. Most other ways of manipulating what I see are no better. I might face towards her or face away; wear a blindfold or not wear one; have my eyes open or have them closed. Each time, I’m changing the situation’s observable features – and Cleo might respond to this change, rather than to my mental state.
Now, I said it’s nearly impossible to avoid this. It’s not quite impossible – but this is the second part of the problem. Even if I could manipulate my mental state without manipulating the observable cues, that wouldn’t help. Suppose I install a covert surveillance camera – so that I can see Cleo without sticking my head around the corner. I manipulate whether I can see her by turning the camera feed off and on. From Cleo’s perspective, there is no observable difference between these situations. So, if she continues to freeze when I can see her and move when I can’t, we can rule out the possibility that she is responding to the observable features of the situation.
But the problem here is obvious. In this situation, we can’t reasonably expect Cleo to freeze when I can see her – since she doesn’t, at any point, have any reason to think that I can see her! Mindreaders make judgments about what others can see on the basis of observable cues like whether their faces are visible, their eyes open, etc. If these observable cues are eliminated, mindreaders have no evidence on which to base judgments about mental states – unless they are telepathic! So, whilst these observable cues seem to undermine the validity of tests for mindreading, they are also – paradoxically – essential.
This problem, originally set out by the psychologists Daniel Povinelli and Jennifer Vonk, has become known as the ‘Logical Problem’, because it highlights a flaw in the ‘logic’ of behavioural tests for mindreading. It has plagued research into non-human mindreading for over a decade. In that time, for instance, chimpanzees have repeatedly ‘passed’ tests intended to determine whether they know that others can see. But for each test, the same problem arises: it is possible that the chimpanzees are behaviour-reading – using strategies like ‘steal food if there’s an opaque barrier between the food and its owner’, or ‘don’t beg from a human whose eyes you can’t see’.
Some researchers have argued that the Logical Problem can’t be solved by devising cleverer tests: that for any possible test, the same problem will arise. If they’re right, then animal mindreading is an area in which our evidence will always underdetermine our theory. That is, our evidence will always be consistent with two incompatible theories: that animals are mindreaders, or that they’re merely ‘behaviour-readers’.
So, what is the rational thing to believe about whether animals read minds? A natural answer, perhaps, is that we should suspend belief. If our evidence is consistent with both theories, how can we rationally prefer one to the other?
But this is too quick. Underdetermination is more common than you’d think. Here’s an example, due to the philosopher Christopher Peacocke. It’s consistent with my evidence that the person I share an office with is a ‘Martian Marionette’ – that is, a mindless puppet whose behaviour is controlled remotely by aliens. It’s also consistent with my evidence that he’s a real person. So, my evidence about my office-mate underdetermines my theory.
Surely, though, it’s not unreasonable for me to believe he’s a real person! One reason to think so is that the ‘Martian Marionette’ theory is just a terrible explanation of my evidence. The ‘real person’ theory is a far better one, so it’s reasonable to prefer it. This is an instance of what’s called ‘inference to the best explanation’ – a pattern of inference common in both science and everyday life, according to which we should prefer the theory that best explains our evidence. Similarly, if our evidence underdetermines our theory about whether animals read minds, we might reasonably prefer the theory that best explains the evidence. So now the question becomes: which theory best explains the evidence?
Here, though, things get even more difficult – because to answer this question we’d need to know what makes one explanation better than another. But what makes an explanation good? What about simplicity – is a simpler explanation a better one? If so, what makes an explanation simple? Advocates of both the ‘mindreading theory’ and the ‘behaviour-reading theory’ each claim their theory is the simpler one. Who is right? Are there different types of simplicity? If so, which of them is relevant for deciding whether an explanation is good? And so on…
These are tough questions – and very obviously philosophical ones! But answering them is likely to be key to figuring out whether animals read minds. This is a concrete illustration of the relevance of philosophy, even to questions which seem straightforwardly to belong to science. After all, we were led to consider these very abstract questions by our attempt to answer what seemed a much simpler, and more obviously empirical one: what my cat is thinking when our eyes meet over a game of Statues.