
RE: Does Freedom Require Radical Transparency or Radical Privacy?

in #eos • 7 years ago (edited)

I disagree with radical transparency because I think it is unnecessary: we can have both privacy (private as in kept from human judgment) and the benefits of transparency. Nick Szabo solved this problem in his blog post titled The God Protocols. I respond to your post with one of my own:

https://steemit.com/politics/@dana-edwards/total-transparancy-benefits-the-top-of-the-pyramid-and-may-not-actually-work-as-intended-there-are-costs

The difference of opinion between Dan and me on this is WHO and WHAT is to be in the position of God. Dan suggests we put the crowd/mob in that position. I'm suggesting that human beings aren't fit for it due to bias, ignorance, and other human frailties. Instead, I suggest only AI can ever be in that position, and only unbiased AI, which can only be built decentralized.

The God Protocols require homomorphic encryption and decentralized AI. In theory we can build that, so why should we risk a nightmare dystopia brought on by the crowd as God? A crowd empowered by blockchain tech so it can never forgive or forget? A crowd which keeps all the bias and ignorance it has today? A crowd which will ultimately just create another pyramid, with the most normal people on top and the abnormal on the bottom?

Being normal is a fate of birth, not chosen. So it's just another social hierarchy, in this case more like a beauty contest, since people will be born into certain positions. And do we want to optimize for normalcy? Sometimes abnormal thinking is exactly what leads to breakthroughs, and abnormal behavior which doesn't harm anyone else is nobody's business, in my opinion.


Making AI god is scary. Haven't you ever watched Terminator?

And making human beings like you God isn't? Haven't you studied enough history to know where that leads?

Given a choice, if we have to put something in that position, then we should put the smartest, least biased, most rational among our creations there. History shows human beings aren't fit for it, so if given the choice I would personally choose to have AI judge me rather than humans such as yourself.

I think we should be given the choice. And no, I don't think the AI in Terminator is the obvious result of decentralized AI. That is the result (even in the story) of a government creating an AI for warfare purposes and losing control of its weapon. It is also the result of bias in the design of the AI (nationalism was the source of the bias), which led the AI to see humans as the enemy. That is exactly a case for decentralizing AI as a way to remove such bias.

I don't think AI would be a good god. I think it will always lack the creative thinking and adaptability of humans, and I don't believe it will always be able to judge the creations of humans. How do you algorithmically judge something unprecedented, with no information to base your judgment on?

I don't see how AI could replace governments (for the same reason I don't think it would be a good god). And good government needs to be transparent to humans, because it rules over them, and humans are the only ones who can judge it. An AI judge would just relay the morality of the AI's creator. So you still have the problem of who should create that judge, which is essentially the same problem as: who should judge/govern us?

So humans will always be in control of AI (and I think you want that); therefore the same transparency vs privacy problem among humans remains.

Hear, hear, Dana! I agree more with you than Dan on this.

However, I do agree with him on how risky / scary using AI for such decisions would be. The one thing usually overlooked by those who discuss AI is human influence. I mean, the AI is programmed by humans, so don't you think that will bias it? Even if it is created by a collective of humans, there are still only a few who decide what code gets included and what does not. Those humans bias the design and functionality of the resulting AI, and I contend we can never remove the human component from our creations, AI being one of them.

This same lack of understanding about how open source code is developed and how such projects are managed gives rise to the belief that open source is the perfect solution, when it is not. I see this frequently when the Bitcoin Core dev team is discussed. It's not the devs who dictate what pieces of code get into production; it is their managers, those at the top of the Blockstream power pyramid, who call the shots. Contributions by devs who disagree will not be included. Decisions about what is and is not included in the production software are not up to the developer collective, only the Blockstream bosses.

The risk is what? That you get judged and killed? Humans do that on a regular basis, on a mass scale. Humans have already committed genocide on multiple occasions, so where with AI you have only movies to base your fear on, with humans we have history. Which is worse?

> However, I do agree with him on how risky / scary using AI for such decisions would be. The one thing usually overlooked by those who discuss AI is human influence. I mean the AI is programmed by humans, so don't you think that will bias it? Even if it is created by a collective of humans, there are still only a few that decide what code gets included and which does not. Those humans bias the design & functionality of the resulting AI and I contend we can't ever remove the human component of our creations, AI being one of them.

You can program and personalize your AI to adopt your morals. So when you say AI is programmed by humans, this doesn't mean the AI or the programming has to be done by a centralized group of humans. Simply let the individual tell the AI their interests and their values, and ask the AI questions about what to do.

My implementation of machine enhanced decision support

When humans have to make big decisions, they traditionally seek advice from people with more experience. The problem is that not all humans are socially wealthy enough to have trusted, more experienced people to get advice from. The President has advisors, for example, and CEOs have boards of directors, but some kid growing up in the slums somewhere has only themselves, because there aren't any mentors. In terms of improving morality, I never specifically put it that way; rather, if you improve decision-making capability, you indirectly improve the capacity for moral behavior.

So a cyborg without any human mentors in their life can simply ask the crowd. We see this now with Quora, for example, and other technologies that let you ask the crowd. We also see it on Facebook, where a random poster will ask the crowd. That is how cyborgs make decisions, and it is, in essence, mining crowd sentiment manually. The problem is: what if you aren't clever enough or mature enough to even think to ask the crowd? Or what if the crowd is biased, ignorant, superficial, etc.?

"Ask the machines and the crowd" is the solution I propose. Asking the machines is, in essence, asking the AI for advice. The AI becomes the best friend, the mentor, the father or mother figure, the big brother, the religious or spiritual advisor. The AI takes the place of a human being to help the individual cyborg (a human with a smartphone and Internet connection) make wiser decisions.

In my implementation it would be up to each human to determine their own values, their own interests, and their own level of trust in AI. Some humans, for example, only care what the crowd thinks and will simply tell the AI to give them the latest sentiment analysis on how each decision will be perceived by the majority of the crowd. Other humans might be mostly concerned with their own survival, freedom, and happiness, and might direct the AI to help them decide what to do so as not to take unnecessary losses or excessive risks to their interests. Finally, you might have some who trust the AI so much that they completely merge with it and let the AI dictate morality completely.
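To make this concrete, here is a minimal sketch of what "each human determines their own values and the AI ranks decisions against them" could look like. Everything here is hypothetical: the value names, the weights, and the predicted effects of each option are made-up illustrations, and a real exocortex would learn or elicit them rather than take a hard-coded dictionary.

```python
# Hypothetical sketch: a personal AI advisor that ranks options
# against the user's own declared values (all names/weights invented).

def score_option(option_effects, user_values):
    """Weight each predicted effect of an option by how much the user cares."""
    return sum(user_values.get(effect, 0.0) * impact
               for effect, impact in option_effects.items())

def advise(options, user_values):
    """Rank (name, effects) options by alignment with the user's values."""
    return sorted(options,
                  key=lambda opt: score_option(opt[1], user_values),
                  reverse=True)

# One user optimizes for privacy and survival over crowd approval.
privacy_first = {"privacy": 1.0, "survival": 0.8, "crowd_approval": 0.1}

options = [
    ("post publicly",  {"crowd_approval": 0.9, "privacy": -0.7}),
    ("post privately", {"crowd_approval": 0.2, "privacy": 0.6}),
]

ranked = advise(options, privacy_first)
print(ranked[0][0])  # the option best aligned with this user's values
```

A user who instead set `crowd_approval` to 1.0 and `privacy` near zero would get the opposite ranking from the same code, which is the point: the program is shared, the morals are personal.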

AI offers the benefit of being a potential character witness as well. And if a person was following the moral and legal advice of an AI, how culpable would they be in court? If the AI is smarter than everyone in the courtroom, then it's a rather different kind of trial, is it not? The current justice system could be described as amoral in human terms.

The questions I would ask are: do you want to survive radical transparency? And do you think you have better odds of surviving it as an unenhanced human, or as a morally enhanced cyborg?

So as you can see, it all depends on how the AI is developed and implemented. The AI is merely an exocortex to help us think better. Thinking better, we can make better decisions. With enhanced decision-making capability we have an enhanced capacity for morality. I see it as the only way to survive in a radically transparent world.

Making even what we consider small decisions today will, in my opinion, require adopting much higher standards tomorrow as the world becomes more transparent. We will have to capture crowd sentiment on a regular basis using machine intelligence, and we will have to rely on AI to do this because manually mining crowd sentiment is too time-consuming, too hard, and too inefficient. Decisions don't wait, so the only way to scale up human decision makers is machine intelligence.
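A toy sketch of what automated crowd-sentiment mining could look like. This is deliberately simplistic: a real system would use a trained sentiment model, not a hand-made word list; the word lists and sample comments below are invented purely to show the shape of the pipeline.

```python
# Hypothetical sketch: aggregate the crowd's reaction to a decision
# by scoring comments against invented positive/negative word lists.

POSITIVE = {"good", "support", "agree", "great"}
NEGATIVE = {"bad", "oppose", "disagree", "awful"}

def sentiment(comment):
    """Crude per-comment score: positive word count minus negative."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def crowd_sentiment(comments):
    """Average sentiment across the crowd; 0.0 for an empty crowd."""
    scores = [sentiment(c) for c in comments]
    return sum(scores) / len(scores) if scores else 0.0

crowd = [
    "I agree this is good",
    "bad idea I oppose it",
    "great I support this",
]
print(crowd_sentiment(crowd))
```

Even this crude version illustrates the scaling argument above: scoring a million comments this way takes seconds, while manually reading the crowd does not.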

> Even if it is created by a collective of humans, there are still only a few that decide what code gets included and which does not. Those humans bias the design & functionality of the resulting AI and I contend we can't ever remove the human component of our creations, AI being one of them.

So decentralize it so that it is no longer left only to a few. Let everyone program their own AI, in the same way the personal computer empowered everyone: personal AI which you program, which has your values, your morals. We can do this, so why not? Better to do this, as it could literally save lives, because the path toward radical transparency will cost lives for sure, whether by "vigilante justice" similar to what we see in countries with death squads, or by suicides.

> It's not the devs who dictate what pieces of code get into production, it is their managers, those at the top of the Blockstream power pyramid who call the shots. Contributions by devs who disagree will not be included. Decisions about what is and what is not included in the production software are not up to the developer collective, only the Blockstream Bosses.

So you are talking about how development is centralized? That is a problem, but we have all the tools to begin decentralizing it. Why not? I see no other way forward which can preserve life and limit unnecessary suffering. The path Dan suggests would lead to unnecessary suffering: people being shunned, bullied, or otherwise punished for crimes or for being immoral.

If you have a few hours to read and want to see a thorough treatment of this topic, read this: Neuralink and the Brain's Magical Future - Wait But Why
And I agree: whatever you do, don't make the AI god.