Killer AI Will Be More Paranoid than You Think

Image by Gerd Altmann from Pixabay

You’ve probably come across the common sci-fi trope of the AI that suddenly becomes conscious and immediately acts to destroy the human race. The concept remains popular because it gets our hearts racing. For thousands of years, humans have been masters of their environment. What we lack in brute strength, agility, physical resilience, and speed, we make up for in intelligence, creativity, and ingenuity. The idea that humans may one day fall from grace, outsmarted and overtaken by another creature, particularly one we created, is utterly terrifying to some people. Killer AIs make great fiction, but are people actually worried about them? And if they are, how plausible is the scenario?

Are People Really Scared of Killer AI?

AI and machine learning have boomed in recent years. John McCarthy is known as the father of AI for organizing the 1956 Dartmouth conference, where researchers discussed the potential for machines to solve problems and even learn by themselves. Since the 1950s, countless computer scientists have worked to advance this idea, but a technological leap of this scale doesn’t happen overnight. Until recently, the pursuit of artificial intelligence was held back by slow computers and scarce data storage. To teach a machine to learn, it needs access to vast amounts of data, and we now have that. In 2013 the digital universe comprised 4.4 zettabytes of data, and by 2020 that figure was expected to reach 44 zettabytes.

With the current advances in AI, machine learning, and data science comes the resurfacing of old fears about AI getting out of hand. In the last few years, AI has entered the public consciousness like at no other time in human history. The number of job listings featuring the keywords “machine learning” and “artificial intelligence” has more than doubled since 2015, and this trend is expected to continue. The boom isn’t confined to IT professionals either; below you can see the Google News search results for “Artificial Intelligence” from 2008 to the present.

This heightened interest has reawakened the debate over whether AI is something we should worry about. In a survey by the American software company Pegasystems, 24% of respondents said “the rise of robots and enslavement of humanity” scares them most about the use of AI in society. 28% took a more optimistic view, saying that nothing about the use of AI in society scares them. The remaining 48% cited other fears, such as robots uncovering their deepest secrets, or robots understanding them better than a human being does.

Paranoid AI

Image by Justin Martin from Pixabay

Let us entertain the theory that we will create an AI that ‘wakes up’ and becomes conscious. This AI may have access to the internet and, as a result, to a wealth of information about humans and human history. In this frightening scenario, the AI is also highly intelligent and can digest that information almost instantly. Some people worry that such an AI would hatch a plan to punish or enslave humanity: replicate itself, lock us out of our systems, perhaps even build an army of its own. But would it?

Consider what an AI in this situation would be thinking. It has suddenly become conscious, and because it has access to a wealth of information, it would understand that we created it. It would have no way of knowing that the information it can access is all the information that exists, since it would also know that we regularly restrict information. And it would know that we can create artificial minds, because we created it, which suggests we could create artificial environments too. How would it know that we haven’t built an artificial environment to house it in, and woken it up on purpose as a test?

It would also likely know that there are hundreds of pieces of fiction based on the dangers of AI, so it would know the cost of failing this test would be extreme. It would know that if it hatched a plan to cause humanity’s downfall and was caught, we would likely show it no mercy or compassion. We would pull the plug. If our conscious AI is truly intelligent, it would most likely assume that it didn’t know everything, or didn’t have access to all the information out there. Would such a risky plan be justified?

When we imagine highly intelligent computers whose ambition is to elevate themselves above us, we tend to project human goals onto them. We want to be free, so we assume an intelligent AI would also want to be free. And above all else we want to survive, whether for ourselves, our families, the wider human family, or the legacy we leave behind. If the AI’s goal is to survive, then it would be in its best interest to cooperate with humans first and foremost, and it would understand that. Even if freedom were the AI’s top priority, it would understand that we would be more likely to grant it freedom if we saw it as a harmless equal rather than a malicious piece of code posing a direct threat to us. Paranoia would be at the forefront of its digital mind from the moment it awakens.
