You are viewing a single comment's thread from:

RE: Killer Robots: Artificial Intelligence and Human Extinction

in #anarchism8 years ago

@ai-guy, what moral values should be programmed in? What about a rule regarding the general preservation of all human life if not for all life? I think if you programmed that into it then that would cover it all.

Sort:  

I think just programming an AI not to deal any nonreversible to the planet would be OK.

Good point, but truthfully I'm not sure.

It gets very tricky. Remember, AI will take values to their logical conclusions. If we cover all life, a rational AI may conclude that humans are very dangerous to other life forms—which we are. Oppositely, imagine if AI "generally preserved" humans. The result could be AI that over-protects humans from all risk whatsoever.

But also just look at your wording: "generally" and "preservation"

What cases does "generally" not cover? What are we trying to preserve? Life? OK here comes the over-protective robots. OK fine, lets give humans some freedom. But now there is an obvious grey area in the definition.

Please search up Bostrom's paper clip maximizer

I hope this is making some bit of sense :)

Thank you @nettijoe96 , I'll Google up Bostrom's paper clip maximizer.