You are viewing a single comment's thread from:
RE: If We're in a Simulation, Could Studying Morality and A.I. Improve Your Life?
Cool argument, it's essentially a converse Roko's Basilisk, right?
I hadn’t thought of it that way, but that’s interesting. I was thinking more about humans running simulations to better understand how they should create A.I. But if the A.I. takes over the simulation, that brings us back to an inverse Roko’s Basilisk.