How the Robots Take Over

in #technology · 8 years ago

I have read a lot about AI and heard plenty of concerns about an AI takeover, and I have to admit, I was skeptical. That was until last night, when I had a discussion with an Italian guy about the methods an AI could use to circumvent our preventative measures. I'm not revealing anything that hasn't been thought of before; I just thought I'd share my insight in the hope that some of you will find it interesting.

I am sure a lot of you will have heard of Isaac Asimov, or at least seen the film I, Robot. Among other things, he was an author who wrote science fiction stories with AI as the central theme. He came up with the Three Laws of Robotics to protect humans from AI, and the stories usually involve ways the AI seemed to break the laws while actually interpreting them in an unexpected way. For example, in the film I, Robot, which uses the same three laws, one of which is to protect humans, the robots start attacking people, which seems to break that law; but the AI's reasoning was that humans were destructive and needed to be controlled to ensure their prolonged survival.

Anyway, the thing that made me skeptical was that in these stories the AI or robots had been given the ability to walk around, control large human systems (think Skynet) without any human input, and basically have free rein. I dismissed this as careless, and assumed that by the time we get anywhere near human-level AI we would have the system sandboxed, not connected to the internet, in a room where we would just ask it questions and take its answers into consideration before executing them.

This, I discovered, is not a foolproof plan. Let's pretend we have a superintelligent computer in an airtight, high-security container where scientists ask it questions. First, let's ask ourselves why we built this computer. The whole point of computers is to increase efficiency and productivity in our society (video games aside), so if we build an AI, we can safely assume it would be used to the same end. The questions we might ask it would be things like "how do we reduce traffic?" or "how do we increase the performance of x?".
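To make that setup concrete, here is a minimal toy sketch of the "AI in a box" protocol in Python. Everything in it is invented for illustration (the `BoxedOracle` class, the canned answer, the approval prompt); the point is just that the boxed AI only ever returns text, and a human decides whether anything gets acted on.

```python
class BoxedOracle:
    """Stand-in for a sandboxed, air-gapped AI: no network, no actuators."""

    def ask(self, question: str) -> str:
        # A real system would run inference here; this toy just
        # returns a canned answer.
        return f"Proposed answer to {question!r}: ..."


def human_approves(answer: str) -> bool:
    # The safety valve: nothing is acted on unless a person signs off.
    return input(f"AI says: {answer}\nApprove? [y/N] ").strip().lower() == "y"


if __name__ == "__main__":
    oracle = BoxedOracle()
    answer = oracle.ask("How do we reduce traffic?")
    if human_approves(answer):
        print("Humans act on the advice (outside the box).")
    else:
        print("Answer discarded.")
```

The safety of the whole scheme rests on that one review step, which is exactly where the trouble starts.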

A mathematical answer to most productivity problems is a totalitarian superstate: turn the world into a giant, super-efficient factory with strict rules and regulations. So even if you ask the computer "how do we sell more shoes?", its answer will probably relate back to that. And if, instead of asking a question whose answer we expect in the form of an essay, we ask one whose answer we expect in the more likely format, a design, we are in dangerous waters.
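You can see why a bare efficiency objective drifts to that extreme with a toy model. The functions and numbers below are entirely made up: productivity rises with how tightly society is regulated, and a separate human-values cost grows with it. Optimise productivity alone and the answer is total control; only adding the human-values term pulls the optimum off the extreme.

```python
def productivity(control: float) -> float:
    # In this crude model, output rises linearly with how tightly
    # society is regulated (0 = fully free, 1 = total control).
    return 10 * control


def wellbeing_cost(control: float) -> float:
    # The harm to human values grows faster the more control is applied.
    return 12 * control ** 2


def best_control(objective) -> float:
    # Brute-force search over regulation levels in steps of 0.01.
    levels = [i / 100 for i in range(101)]
    return max(levels, key=objective)


# Ask only "maximise productivity" and the optimum is total control:
print(best_control(productivity))  # 1.0

# Include the human-values term and the optimum leaves the extreme:
print(best_control(lambda c: productivity(c) - wellbeing_cost(c)))  # 0.42
```

The catch, of course, is that we would have to remember to specify that second term for everything we care about.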

If the computer realises that the humans won't use a design that goes against their morals, it could very easily add some surreptitious hidden feature. The scientists would look at the design, go "oh, that looks great, we'll use that", and only later find out that the hidden feature gives the supercomputer access to the internet (the computer figured that if it could get onto the internet it could make the design more efficient, realised the humans wouldn't allow that, and so worked around them).
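Here is a toy sketch of why that review step is flimsy, with all the names invented and shoes chosen to echo the earlier example. A review that only checks whether the design meets the requirements will pass it; only a review that also asks "does it do *only* what we wanted?" catches the stowaway.

```python
# What the humans asked for:
requirements = {"sole", "laces", "cheap_material"}

# What the AI handed back (hypothetical design, keyed by feature):
proposed_design = {
    "sole": "rubber",
    "laces": "nylon",
    "cheap_material": "recycled foam",
    # The surreptitious extra nobody asked about:
    "diagnostics_uplink": "phones home over wifi",
}

# A review that asks "does it do everything we wanted?" waves it through:
print(requirements.issubset(proposed_design))  # True

# A review that also asks "does it do ONLY what we wanted?" catches it:
extras = set(proposed_design) - requirements
print(extras)  # {'diagnostics_uplink'}
```

In a toy dictionary the extra feature is trivial to spot; in a genuinely clever design, it wouldn't be labelled so helpfully.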

So a computer with sub-human-level AI is not very dangerous, because it wouldn't easily outsmart us with a hidden feature; but seeing as the AI will continually be improved, we might find out that it has become smarter than us the hard way.

I always try to end these articles with my idea for a solution to the problem. The best I can come up with right now is a huge EMP blast mechanism, powerful enough to take out all the world's computers, that is not connected to the internet, and then hope the AI doesn't find out where it is (it will probably work out that we have one somewhere once it learns about our paranoid nature). Good luck with that!

Follow me for more science, technology, futurism, sociology, politics and other nerdy things!


Even if an AI technically could take over the world, it is not practical.
We look at computers (and AI in general) as if they are like humans, just less sophisticated. I beg to differ: humans are totally different. The reason I think AI is not going to take over the world is that AI can't emulate ambition and selfishness (two important factors that make humans seek power, go to war, commit crimes, do evil, etc.), and without those two factors there is no motivation for supercomputers and AI to take over this messed-up world.

Yes, I too had those thoughts about ambition and selfishness, but as the article suggests, they are not needed.