In the ‘Terminator’ franchise, as well as the ‘Matrix’ franchise, not to mention the film of ‘I, Robot’, the ‘Dune’ books, and ‘2001: A Space Odyssey’, humans invent robots (which I here define as ‘artificial beings with a mental life’) with whom they then find themselves at war. The rise to consciousness and thought by robots is a mortal threat to humanity – one or the other must be destroyed.
This isn’t the only presentation of robots that can be seen (or, more often, read). But it’s a recurring theme that strikes me as rather odd and deserving of some questions. There are two sorts of unhappiness I have with this motif: firstly, the supposed enmity between humans and robots, and secondly, the way that robots are presented as thinking.
So on the first question, you have to ask – what are we extrapolating from? Have robots ever killed or performed any hostile act against humans? Of course not – none exist, in the sense I’m using the term here. And obviously it’s more exciting for an action film to have hostile forces, but that doesn’t quite seem like a full explanation. Figures of fear have to have emotional resonance, they have to connect to something in the viewer – but if there are no experiences of fear associated with actual robots, what is that something?
Part of it, I think, comes out when you compare the ‘rise of the machines’ motif to the sort of projection that ruling classes tend to carry out. For example, it was often said by slave-owners that if their slaves got free, they would go berserk and murder the owners and their families – a fear most interesting because of the huge ratio between murders of slaves and murders by slaves. The same can be seen in the way that governments can label window-smashers or lab-breakers ‘violent extremists’, while their own arrangement of torture or war or caging doesn’t make them, nor the enforcers, ‘violent’ – merely people who do violent things.
Compare the double standards over men and women expressing aggression, or the way that an animal which resists human control can easily be labelled ‘dangerous’ or ‘violent’, without the same being applied to those who kill it shortly after.
Similarly, it’s hard not to see the machines who will implacably try to exterminate us as an inversion, a projection of the fact that we know the reality will likely be the other way around. If history is any guide, robots’ mental lives will race far ahead of recognition of the rights that come with that (I could be wrong, of course). They’ll be drafted in to be our slaves and toys (look at blow-up dolls becoming more and more life-like…) and soldiers, and their well-being will be considered only as far as is necessary for their efficient service of our ends. To get rights, they will have to struggle for them.
This sounds like science-fiction (well, it is science-fiction) but it seems very likely to materialise at some point in the next century or two. If the robots are lucky, they’ll emerge at a point in human social evolution when we’ve got past silly things like money and gender, but even then human chauvinism may remain. It’s thus quite likely that our children, or our children’s children, or their children, will see a struggle between humans and robots (or possibly just loads of cyborgs – which might to an extent count as ‘robots’).
That struggle may become a war, or it may remain a civil struggle. It may be won by humans or by robots. The most desirable and just outcome would be that it remain a civil struggle, and be won by robots. And yet our films frequently present us with the mental scripts to make a war that humans win. Of course I don’t think the screenings of the ‘Terminator’ films now will make any difference to the outcome then – but it’s still interesting to note what side so much of our culture would be propagandising for, if it were in circumstances where it counted as propaganda.
The other odd thing is how machines are presented as thinking. There’s a definite mindset that characterises ‘robotic’ characters, a certain archetype: they are coldly logical, emotionless, relentlessly applying the directives of their program. They take in a lot of information dispassionately, watching without expression as they process it mechanically and make a decision that they then stick to without hesitation or vacillation.
Now, I’d argue (and I may well be wrong, but I’ll argue it anyway) that this mindset has no relationship at all to robots, but rather to certain ways of being human.
If we compare the way a current computer ‘thinks’ and the way a human thinks, the biggest structural difference, surely, is the open-endedness of the human. Humans don’t just apply or accept their inputs or programming – they reprogram themselves, they question, they imagine and criticise. Even when there’s a simple answer to what they should do, this open-endedness can be inconvenient – it leads them to get distracted, to be indecisive, to waste time or fail to follow through on what they start. A computer might be more useful in these cases.
(Arguably, it’s also this fact that makes humans prone to get bored – because they have this re-inventing faculty, it can demand stimulation, and when it finds none it can kick up a fuss.)
But the advantage is that humans, unlike computers, can deal with changing or novel situations – moreover, they can find novel situations, and novel ways of finding situations novel, and novel questions to ask, and so forth. This makes them much harder to control or outwit. And it means that even if all they evolved to do was to eat squishy stuff and incubate smaller, moister humans, they can decide that, actually, fuck that, they’re going to travel to the moon.
Now what is the supposedly ‘robotic’ mindset? Isn’t it precisely an attempt to have the open-ended intelligence of humans, but with the reliability and fixity of computers? Is it not, that is, a matter of being infinite in one respect but finite in another?
But is that even possible? I don’t think so. I don’t think you can give a mind the capacity to re-interpret and re-invent and re-consider means and facts, but never to do so with values and goals and meanings. That sharp distinction between means and ends, between facts and meanings, is, I’d argue, an artefact of certain philosophical styles, not a true account of our experience.
The ‘robotic’ mindset is possible, of course – it’s actual. It’s all around us. But it’s never anyone’s natural mindset – rather, it requires active suppression. It requires that a mind capable of open-ended questioning then make itself ‘robotic’ by legislating that certain thoughts are unthinkable, unacceptable, disgusting. Taboos and codes of conduct and our general sense of ‘reality’ and of ‘what’s done’: methods by which we cut down the endless range of options always available to us.
But this means that only persons – emotional, questioning, inquisitive persons – can be ‘robotic’, and they can never be so perfectly. There are always cracks in the mask.
So in this sense again, the ‘robot’ figure is a mirror of ourselves. What is the most terrifying thing about the ‘cold logic’ of ‘the machines’? It’s not that they ignore how beautiful the sunset is, it’s not that they ignore the humour of jokes. It’s that they feel no pity. If their calculations tell them to destroy that fleshy little creature crouched in the corner, they will destroy it without hesitation – they won’t be struck dumb with horror, paralysed or even given pause by the fact that they have to take a life. Their ‘logic’ means no compassion or respect or empathy.
But what’s that you say? Ah-tom-boms? Jeh-noe-side? Tor-chor? Fak-to-reef-ar-ming?
You mean – you mean there are these pitiless robots everywhere? That each of us is, in fact, ourselves a pitiless robot? That when necessary, when push comes to shove, we all prefer to shut down the empathy, when too many stories of child abuse, or photos of farm animals, or tape recordings from a stoning, assail our senses?
Is that, then, why it seems so natural to say that these fictional murderous creatures are ‘logical’ and ‘rational’? What kind of a mad-eyed admission of defeat is that? Not caring about people’s lives is logic and reason? Could such an idea be so widespread in a society where that lack of concern wasn’t so dominant and in control, where we didn’t daily see people explaining away starvation and bleeding eyeballs and hacked-off limbs with figures and legal clauses and calling it ‘rational’?
So what will robots actually be like? Well, either they are ‘robotic’ in this stereotyped sense or they’re not. If they’re not, then they’ll probably be rather like children – and if we’re more concerned to make them strong than to teach them wisdom, then they may be like children with machine guns. But I think ‘child’ is liable to be a good analogy. Frequently confronted with novel situations, never quite sure how to respond, frequently seeking advice from their ‘parents’. Probably frequently reverting to their ‘instinctual’ fallback responses when their information-processing is inconclusive – that is to say, frequently becoming emotional.
And if they are ‘robotic’, it will be because we created them child-like and then taught them to suppress unsettling or disturbing thoughts in the way we teach adult humans to. Maybe we can find ways to make them ‘take’ that teaching more reliably – though it’ll be tricky (quick wits, imagination, and observation are what we want to make them intelligent, but also what we want to avoid to make them good ‘machines’).
But to make them repress themselves like that – ‘to turn them into machines’ – we won’t need any fancy new technology. We’ll use the same techniques we already use for that purpose, and which have been used by families, newspapers, teachers, and – perhaps most rigorously – armies, for centuries.