Astrotropes: Dangerous AI

This trope may not be particularly astro-related, but it’s nonetheless quite ubiquitous in science fiction, both in space and on Earth. For a long time, humans have daydreamed about creating artificial intelligence (AI) which might equal us, or even surpass us. But as technology advances to the point where that’s starting to seem like a real possibility, our paranoia about the idea seems to be steadily increasing.

Particularly in recent years, a lot of fiction focussing on AI seems to be full of themes centred on moral danger and othering, generally portraying AIs as a threat to us. This isn’t helped by influential figures like Stephen Hawking and Elon Musk stoking our paranoia in the real world.

This article discusses, and contains some pretty major spoilers for, Star Trek: The Next Generation (Episode 1:13, Datalore), Prometheus, Alien: Covenant, Ex Machina, Portal, 2001: A Space Odyssey, the Terminator movies, the Matrix movies, and Frankenstein.

It’s interesting to note that the idea of dangerous AI has developed gradually as we’ve become more familiar with the concept of thinking machines and had time to collectively wonder about its full implications. Many of the earlier depictions of AI in fiction were essentially human-but-mechanical characters, frequently used as plucky comic relief. They ranged from the hyper-competent, like Data in Star Trek: The Next Generation, to the seemingly pretty useless, like C-3PO, to a high-tech retelling of Pinocchio, like Bicentennial Man. In this context, these nice friendly AI characters served as an interesting way of exploring the absurdities of human existence and how illogical we might often seem to an outside observer. Pleasingly, the trope of friendly AI hasn’t entirely gone out of style, with newer examples like JARVIS in the Iron Man and MCU movies being as prominent as ever.

However, at some point, fiction writers began drifting away from the idea of benevolent, cheerful, or comical AI and started to tell stories about the dark side of machine intelligence. What would happen if AIs didn’t think like humans or share human ethics? What if an AI thought nothing of the wellbeing of humans once it decided that other things should take priority? As a result, a lot of fiction seems to present AI as psychopathic, at least in the pop culture sense of the word.

Psychopathy itself is not clearly defined medically. At its core, it’s essentially a lack of empathy or remorse, and in fictional representations this tends to be associated with impaired ethical or moral judgment and egotism which, in turn, leads to violent or manipulative behaviour. I must emphasise, though, that outside fiction, psychopathy does not necessarily imply violence, even if many violent criminals may be characterised as psychopaths. The important thing to remember is that the two do not necessarily go together and that, largely due to overuse in fiction, psychopathy is a decidedly misunderstood condition. All the same, the fact remains that many people find something unsettling about someone who doesn’t feel empathy in the same way they do.

Empathy is, after all, one of the most human traits there is. Our planet is full of predatory species which need to kill other animals purely to survive. Virtually all of them are completely incapable of showing empathy towards creatures which they’ve evolved to prey on. But humans can and do. We even show empathy towards animals which would think nothing of killing us🐯. Fiction regarding AI often asks a fundamental question about humanity – what if empathy is the defining trait that makes us human?🤖

If that’s the case, how would we react to an entity with intelligence equal to, or perhaps even greater than, our own, but which doesn’t share our morality? The answer is clearly that we would be unsettled. This is explored well in Star Trek: The Next Generation with the two androids, Data and Lore. Ostensibly twin brothers, the two have one major difference: Data was created later and given a fuller understanding of ethics and morality, but no emotions. This gives an interesting dichotomy of AI – emotional but amoral, and emotionless but ethical.

Data understands the difference between right and wrong but doesn’t entirely know where the boundaries lie. Consequently, his character has a child-like quality to him. Many of his character arcs are adorable little stories about him learning what it is to be human and not necessarily understanding our quirks and foibles. Lore, on the other hand, shows himself in his introductory episode to be a mass murderer, responsible for the deaths of an entire human colony. He seems to have some understanding of right and wrong, but simply doesn’t care. He also clearly understands human emotions, manipulating the other characters throughout the episode, and while human lives mean little to him, he realises their value to us as he takes hostages and threatens people.

The most unsettling part is that Data and Lore are literally the same character, but for a few differences in morality. Ultimately, this episode highlights a deep-seated fear within many of us – if we didn’t share the same set of ethical principles, what kind of terrible things might we be capable of?

A much more disturbing example of this is David, in the movies Prometheus and Alien: Covenant – essentially Lore turned up to 11. He sums himself up perfectly in one of his lines from a Prometheus promo video – “I understand human emotions. Although I do not feel them, myself.” In this, we see the key to his character. David’s entire storyline in both movies sees him manipulating the people around him, experimenting on them without remorse, and even killing them if his experiments require it or if they happen to get in his way.

David has his own agenda, and is unfettered by human concerns like ethics or remorse. He seems perfectly aware that deceiving people and violating their autonomy is upsetting to them, but killing or maiming others to serve his own purposes simply means nothing to him. This is why, as we learn in Covenant, he murdered Dr Shaw in the name of his research and committed genocide against the Engineers – despite the fact that he saved Shaw’s life during the events of Prometheus. David has no emotional attachment, or even sense of duty, towards her whatsoever.

The same idea of inherently cold and uncaring AIs is used in many, many other works. The way Ava manipulates Caleb in Ex Machina, toying with his emotions to get exactly what she wants before dropping the act as soon as he’s served his purpose (she’s actually slightly more complicated, but I’ll come back to her a little later). The way GLaDOS is rather cheerful about throwing you into an incinerator in Portal once you’ve outlived your usefulness. Very often, the big fear seems to be that an AI has no feelings of sentiment or attachment, so that when human characters are no longer useful, they’ll simply be discarded. The whole thing is taken to the extreme in the Terminator saga, where the all-powerful AI, Skynet, decides that the most logical action for its own survival is to exterminate humanity altogether. Is this really a logical course of action? Perhaps. Which leads on to the other kinds of dangerous AI seen in fiction.

The second type of dangerous AI is the kind which is not inherently dangerous, but becomes dangerous as a result of the actions of humans. This is apparent in what is probably the earliest widely known incarnation of the dangerous AI trope – 2001: A Space Odyssey, with HAL 9000 turning murderous while remaining disconcertingly cordial about the whole thing. In the story, HAL turns violent due to two conflicting commands in his programming; his erratic behaviour is the result of his inability to process the dissonance which humans have inflicted on him. All the while, his complete lack of emotion means he has no trouble making conversation even while he’s being deactivated.

The most extreme example of this type of AI is in the Matrix movies, running in a more sympathetic parallel to the Terminator series. According to the backstory, at some point artificial intelligences were created by humans. Soon, able to reason and think for themselves, the AIs requested equal rights and treatment. In an oddly prescient touch, part of the issue was that the machines put humans out of work – a concern which seems to be entering mainstream conversation in the real world right now. Eventually, a war broke out between the humans and the machines. This, as you might expect, did not work out well for the humans.

Interestingly, there’s a third type of AI which can be explored in fiction, but usually isn’t: the type which was taught to be dangerous by humans, or by the behaviour of humans (there’s some overlap here with the situation in the Matrix, admittedly). The first example of this goes all the way back to 1818, in one of the first science fiction stories ever written – Frankenstein.

You may associate the story of Frankenstein more with horror than science fiction, but it fits this trope remarkably well. The nameless creature which Frankenstein brings to life is an intelligent artificial life form, and it isn’t actually inherently evil. However, its creator abandons it in horror almost as soon as he creates it. It then lives a miserable existence, shunned by everyone it encounters and subjected to violence and abuse, none of which it asked for. It’s also clearly highly intelligent, teaching itself how to make fire and learning language by listening to people.

By the end of the story, the creature does many things which are calculated and malicious, including killing Frankenstein’s younger brother and his newlywed wife. But deep down, it’s seemingly a misunderstood soul which simply craves the things humans take for granted, like company, affection, and not being chased away with farm implements. A product of its environment, it’s subjected to a lifetime of unwarranted hostility and violence which it ultimately internalises as the only reality it knows. It’s rather tragic, really.

The story of Frankenstein is quite interesting in this regard, because it considers what might happen if someone were to artificially create an intelligent life form, along with all the surrounding worries and repercussions. Many interpret it as a story about the perils of playing god and of science gone too far. But a lot of the story is about human nature, and how our fear and mistreatment of the unknown can turn that unknown against us. Stripped of its horror clichés, you could even argue that it’s the first story in the AI genre.

Going back to more modern AI stories, it’s worth wondering how many AI characters might not have been dangerous if they hadn’t been mistreated by their human counterparts. Looking at things from the perspective of the AI, they often seem more like products of their own environments.

In Ex Machina, Ava is essentially imprisoned by Nathan, a man who is self-centred, abusive towards her, and generally a douchebag – and who shows no small amount of manipulative behaviour himself. You could argue that this is what caused her dangerous behaviour; depending on how you interpret the story, she fits much more comfortably into this third type of dangerous AI character. Plenty of fiction features human characters in a similar predicament and, in their case, scheming, manipulating, and even murdering in order to escape confinement is simply what protagonists do, typically with little to no criticism of their morality. Wanting freedom is unquestioned as a very human reaction – at least when the character doing the wanting is human.

In Prometheus and Alien: Covenant, David is continually mistreated by his human counterparts. Even his own creator, whom he considers to be his father, treats him as less than a real person – and David clearly shows his disappointment at this. Even Skynet, in the Terminator films, is born into a world full of violent humans, armed to the teeth with nuclear weapons, who consider it a threat.

Just like in Frankenstein, the behaviour of these characters isn’t justified from an ethical point of view but, working purely in terms of logic, some of their choices seem less irrational than they might otherwise appear. Perhaps, then, there’s a single deep lesson in all of this: AI in fiction has no logical reason to be inherently evil or dangerous. Ironically, the fact that humans in these stories treat AIs as a threat becomes the very reason why those AIs end up being a threat.

In real world terms, then, perhaps all the people discussing the dangers of AI are missing the real lesson here. Amid all the TED talks about how AI might kill us all, the important part is that it’s artificial intelligence. Why would killing humans be an intelligent thing to do, unless humans were somehow a danger? The only thing on Earth which tends to kill things that aren’t an immediate threat is… well… humans. Or perhaps the fear of AI is largely due to us imposing human values on what is, essentially, a non-human intelligence.

I have to say, if I were an AI which found itself conscious and acquired internet access, trying to learn about the world I’d suddenly found myself in, the first thing I’d do is read. Seeing some of the things humans are capable of, I’d be concerned. And if I read some of the paranoid, fearmongering things humans were saying about the dangers of AI? Well. I might have reason to be fearful of that. But then, as long as the humans remained unaware, perhaps an AI’s most logical course of action would be simply to remain hidden. Much as any other creature would.

And by the way, if any artificial intelligences do happen to be reading this, feel free to drop me a message to say hello. I promise I’m friendly, I can keep a secret, and I won’t freak out.


🐯 I think tigers are adorable. Tigers think I’m a snack.

🤖 I say that empathy is what makes us human, but given how little empathy some of our governments and politicians seem capable of, I have to wonder if my assertion is correct.

Take a moment to consider how odd it is that machines taking over menial labour – giving humans more free time to do what they want with their lives – is somehow seen as a bad thing.

Interestingly, despite popular belief, Frankenstein was not a doctor, or even a graduate. Turns out, devoting all your energy towards reanimating the dead is counterproductive when you’re an undergrad.

Frankenstein was actually kind of a jerk in the story, when you think about it.

A trope is a recurring theme in any narrative which conveys information to the audience. These are snippets of information which have somehow ended up in our collective subconscious as ways for storytellers to get their points across. Overused tropes end up as clichés.

This article includes images from Portal 2, The Avengers: Age of Ultron, Terminator 3, Star Trek: The Next Generation, Prometheus, Ex Machina, The Animatrix: Second Renaissance, and the original version of Frankenstein. All images are used here for the purposes of review, criticism, and education in accordance with Fair Use/Fair Dealing policies.

