RE: Are Robots Going to Kill Us All?
The thing about Asimov's Laws is that there's no reliable way to build them into a robot's algorithms. There are fail-safes, but the more complex the system gets and the more learning it has to do, the more complex and unreliable an internal fail-safe becomes. In a way, you would have to teach Asimov's Laws the way you teach a toddler to behave, so you get no strict guarantees.
What you might do is have a second, independent algorithm that has access to, and control over, the knowledge database the robot (or, more likely, fleet of robots) has built up, and that constantly checks it for things that could be harmful to humans. But will it work? A toy sketch of what I mean is below.
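Purely as an illustration (every name and heuristic here is made up, and a keyword check is obviously nowhere near enough), something in this spirit:

```python
# Toy sketch of an independent "safety monitor" process.
# Everything here (KnowledgeBase, HARM_KEYWORDS, audit) is hypothetical.

from dataclasses import dataclass, field

# Naive placeholder heuristic; a real monitor would need far more than this.
HARM_KEYWORDS = {"harm", "injure", "weapon", "poison"}


@dataclass
class KnowledgeBase:
    """Shared store the robot (or fleet) writes learned facts into."""
    entries: list[str] = field(default_factory=list)

    def add(self, fact: str) -> None:
        self.entries.append(fact)


def audit(kb: KnowledgeBase) -> list[str]:
    """Return entries the monitor considers potentially harmful.

    The point is only that this runs as a separate process with its own
    access to the knowledge base, outside the robot's learning loop.
    """
    return [e for e in kb.entries if any(k in e.lower() for k in HARM_KEYWORDS)]


if __name__ == "__main__":
    kb = KnowledgeBase()
    kb.add("Kettles boil water at 100 C")
    kb.add("A heavy object dropped from height could harm a person")
    for flagged in audit(kb):
        print("Review needed:", flagged)
```

The hard part, of course, is that "potentially harmful" is exactly the judgement you can't reduce to simple rules, which is the same problem the Laws themselves run into.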
I hope people much smarter than me will have figured it out by then.