RE: Does Freedom Require Radical Transparency or Radical Privacy?

in #eos · 7 years ago

The risk is what? That you get judged and killed? Humans do that on a regular basis, on a mass scale. Humans have already committed genocide on multiple occasions, so where with AI you have only movies to base your fear on, with humans we have history. Which is worse?

However, I do agree with him on how risky and scary using AI for such decisions would be. The one thing usually overlooked by those who discuss AI is human influence. The AI is programmed by humans, so don't you think that will bias it? Even if it is created by a collective of humans, there are still only a few who decide what code gets included and what does not. Those humans bias the design and functionality of the resulting AI, and I contend we can never remove the human component from our creations, AI being one of them.

You can program and personalize your AI to adopt your morals. So when you say AI is programmed by humans, this doesn't mean the AI or the programming has to be done by a centralized group of humans. Simply allow individuals to tell the AI their interests and values, and to ask the AI questions about what to do.

My implementation of machine-enhanced decision support

When humans have to make big decisions, they have traditionally sought advice from people with more experience. The problem is that not all humans are socially wealthy enough to have trusted, more experienced people to get advice from. The President has advisors, for example, and CEOs have boards of directors, but a kid growing up in the slums somewhere has only themselves, because there aren't any mentors. In terms of improving morality, I never really put it that way specifically; rather, if you improve decision-making capability, you indirectly improve the capacity for moral behavior.

So a cyborg without any human mentors in their life can simply ask the crowd. We see this now with Quora, for example, and other technologies that let you ask the crowd. We also see it on Facebook, where a random poster will ask the crowd. That is how cyborgs make decisions, and it is, in essence, mining crowd sentiment manually. The problem is: what if you aren't clever or mature enough to even think to ask the crowd? Or what if the crowd is biased, ignorant, superficial, etc.?
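Mining crowd sentiment manually, as described above, can be pictured as nothing more than tallying crowd responses into proportions. This is only a toy sketch; the poll labels and the function name are hypothetical and not tied to any real platform's API:

```python
from collections import Counter

def crowd_sentiment(responses):
    """Tally crowd responses into a sentiment breakdown.

    `responses` is a list of strings such as "approve" / "disapprove";
    these labels are hypothetical placeholders for whatever a real
    poll or comment thread would yield.
    """
    counts = Counter(responses)
    total = sum(counts.values())
    # Return each label's share of the crowd's opinion.
    return {label: count / total for label, count in counts.items()}

# Example: asking the crowd whether to take a new job.
poll = ["approve", "approve", "disapprove", "approve", "unsure"]
print(crowd_sentiment(poll))
```

The obvious weaknesses the paragraph above points out still apply: the tally is only as good as the crowd supplying it.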

"Ask the machines and the crowd" is the solution I propose. Asking the machines is, in essence, asking the AI for advice. The AI becomes the best friend, the mentor, the father or mother figure, the big brother, the religious or spiritual advisor. The AI takes the place of a human being in this instance, helping the individual cyborg (a human with a smartphone and an Internet connection) make wiser decisions.

In my implementation it would be up to each human to determine their own values, their own interests, and their own level of trust in AI. Some humans, for example, only care about what the crowd thinks and will simply tell the AI to give them the latest sentiment analysis on how each decision will be perceived by the majority of the crowd. Other humans might be mostly concerned with their own survival, freedom, and happiness, and might direct the AI to help them decide what to do so as not to take unnecessary losses or excessive risks to their interests. Finally, some might trust the AI so much that they completely merge with it and let the AI dictate morality entirely.
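As a rough illustration of the second kind of user above (one who tells the AI their own values and asks it to weigh risks), one could imagine the AI scoring each option against user-supplied value weights. Everything below — the value names, option names, and scoring rule — is a hypothetical sketch, not a real system:

```python
def advise(options, values):
    """Score decision options against a person's own value weights.

    `options` maps option name -> {value name: how well the option serves it}.
    `values`  maps value name  -> personal importance weight, set by the
    individual, not by any centralized group. All names are hypothetical.
    """
    scores = {
        name: sum(values.get(v, 0) * amount for v, amount in traits.items())
        for name, traits in options.items()
    }
    # Recommend the option that best serves the individual's stated values.
    return max(scores, key=scores.get), scores

# One user weights survival heavily, freedom next, happiness least.
my_values = {"survival": 0.5, "freedom": 0.3, "happiness": 0.2}
options = {
    "take_risky_job": {"survival": 0.2, "freedom": 0.9, "happiness": 0.8},
    "stay_put":       {"survival": 0.9, "freedom": 0.3, "happiness": 0.4},
}
best, scores = advise(options, my_values)
print(best)  # prints: stay_put
```

The point of the sketch is that the same advisor gives different answers to different people, because the weights come from the individual rather than from whoever wrote the code.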

AI offers the additional benefit of being a potential character witness. Also, if a person was following the moral and legal advice of an AI, how culpable would they be in court? If the AI is smarter than everyone in the courtroom, then it's a rather different kind of trial, is it not? One could argue that the justice system, judged in human terms, is currently amoral.

The questions I can ask are: do you want to survive radical transparency? And do you think you have better odds of surviving it as an unenhanced human, or as a morally enhanced cyborg?