RE: Are Robots Going to Kill Us All?

in #steemit · 8 years ago

Very good points, rocking-dave.

I also find myself in agreement with you. It would be optimal if firmware fail-safes, not dissimilar to Asimov's Laws of Robotics, were adopted as standard.

The only issue is that such laws would impede the building of true trust between humans and machines: humans could never know whether a machine genuinely wants to help humanity, or simply cannot work against it because of its built-in fail-safes.

This is an ethics question. Perhaps the answer lies in weighing rights against responsibilities, and in being able to assess both reliably. That, along with withholding any capacity for force until it has been earned, could reduce the risk.

This is no different from a human being required to hold a licence for a gun.

The thing about Asimov's Laws is that you don't have a real way to build them into the robot's algorithm all that well. There are fail-safes, but the more complex the system gets and the more learning it has to do, the more complex and unreliable an internal fail-safe becomes. In a way, you would have to teach Asimov's Laws the way you teach a toddler to behave, so you get no strict guarantees.
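
To make that concrete, here is a rough sketch of what a hand-coded fail-safe around a learned policy might look like. Every name here (`Action`, `FORBIDDEN_EFFECTS`, `act`) is made up for illustration; the point is simply that the filter is only as good as the list of harms someone thought to write down in advance.

```python
# Minimal sketch of a hard-coded fail-safe wrapping a learned policy.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    predicted_effects: set[str]  # e.g. {"moves_arm", "applies_force"}


# The fail-safe is only as good as this hand-written list, which is
# exactly the problem: the set of harmful effects in an open world
# cannot be enumerated in advance.
FORBIDDEN_EFFECTS = {"harms_human", "allows_harm_by_inaction"}


def is_safe(action: Action) -> bool:
    # Flag any overlap between predicted effects and the forbidden set.
    return not (action.predicted_effects & FORBIDDEN_EFFECTS)


def act(proposed: Action) -> Action | None:
    # Veto anything the static filter flags; otherwise pass it through.
    # A learning system can produce effects this filter has never heard
    # of, so there is no strict guarantee here.
    return proposed if is_safe(proposed) else None
```

A learned policy routes its proposals through `act`, and anything flagged is vetoed; but the moment the robot learns behaviours whose effects aren't in the hand-written vocabulary, the filter is blind to them.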

What you might do is have a second, independent algorithm that has access to and control over the knowledge database the robot (or, more likely, fleet of robots) has built up, and that constantly checks for things that could potentially be harmful to humans. But will it work?
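
Sketching that idea out, purely as one assumed way it could be wired up (`KnowledgeBase`, `Monitor`, and `HARM_PATTERNS` are all invented names): a separate auditor process sweeps the shared knowledge base and quarantines any entry matching harm-related patterns.

```python
# Hedged sketch of the "second independent algorithm" idea: a separate
# monitor scans the shared knowledge base the robots build up and
# quarantines entries matching harm-related patterns. All names are
# hypothetical.

import re

HARM_PATTERNS = [
    re.compile(r"\bharm\b"),
    re.compile(r"\bweapon\b"),
    re.compile(r"\bdisable safety\b"),
]


class KnowledgeBase:
    def __init__(self) -> None:
        self.entries: dict[int, str] = {}
        self.quarantined: dict[int, str] = {}

    def add(self, key: int, fact: str) -> None:
        self.entries[key] = fact


class Monitor:
    """Independent auditor with read/write access to the knowledge base."""

    def __init__(self, kb: KnowledgeBase) -> None:
        self.kb = kb

    def sweep(self) -> list[int]:
        # Collect keys first, then quarantine, to avoid mutating the
        # dict while iterating over it.
        flagged = [
            key for key, fact in self.kb.entries.items()
            if any(p.search(fact) for p in HARM_PATTERNS)
        ]
        for key in flagged:
            # Move the entry out of the robots' reachable knowledge.
            self.kb.quarantined[key] = self.kb.entries.pop(key)
        return flagged


kb = KnowledgeBase()
kb.add(1, "the charging dock is in room 3")
kb.add(2, "a heavy tool can be used as a weapon")
print(Monitor(kb).sweep())  # -> [2]
```

Of course, this monitor has the same weakness as the inline fail-safe: it only catches what its patterns anticipate, which is exactly why whether it would work is an open question.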

I hope people much smarter than me will have figured it out by then.