RE: Big Tech's March to a Dystopian Future

in #surveillance · 6 years ago

The false assumption everybody makes about AI is that it is heading towards omnipotence. It cannot, and it will not. AI has no self-awareness and no purpose of its own; it is we who guide it. AI is only as good as the initial framework it is built upon and the learning data it is fed. Just like us (because it is built by us), it has blind spots.

https://hackernoon.com/dogs-wolves-data-science-and-why-machines-must-learn-like-humans-do-41c43bc7f982
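The dogs-vs-wolves blind spot from the linked article can be sketched as a toy in Python. Everything here is invented for illustration (the feature names, the numbers, the midpoint "classifier"): the point is only that when a spurious feature like snowy backgrounds separates the training set perfectly, a lazy learner adopts it and never learns anything about the animal.

```python
# Toy illustration of the "dogs vs. wolves" blind spot: a classifier
# that latches onto a spurious background feature (snow) instead of
# the animal itself. All data and feature names are fabricated.

# Each "image" is summarised by two hand-picked features:
#   snow:  fraction of white background pixels (0.0 - 1.0)
#   snout: a crude shape measure of the animal
train = [
    # (snow, snout, label) -- every training wolf happens to be in snow
    (0.9, 0.8, "wolf"),
    (0.8, 0.7, "wolf"),
    (0.1, 0.3, "dog"),
    (0.2, 0.4, "dog"),
]

def fit_threshold(data, feature_index):
    """Pick the midpoint between the two class means on one feature."""
    wolves = [x[feature_index] for x in data if x[2] == "wolf"]
    dogs = [x[feature_index] for x in data if x[2] == "dog"]
    return (sum(wolves) / len(wolves) + sum(dogs) / len(dogs)) / 2

# The snow feature separates the training set perfectly, so that
# single threshold becomes the entire decision rule.
snow_cut = fit_threshold(train, 0)  # = 0.5 for the data above

def predict(snow, snout):
    # Note: the snout feature is never consulted at all.
    return "wolf" if snow > snow_cut else "dog"

# A dog photographed in snow is now misclassified: the model learned
# the background, not the animal.
print(predict(snow=0.85, snout=0.3))  # a husky in snow -> "wolf"
```

The blind spot is invisible as long as the test photos share the training set's quirk; it only surfaces when the background and the animal stop co-occurring.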

Awareness is limited by the senses: the data a system receives, and the framework used to process, organise, and then act upon that data.

Yes, AI does pose a huge potential threat to us all, but we are being sold a lie that it will be perfect. And therein lies the greatest danger: an AI connected to our networks that makes mistakes while having the power to change things.

Weaponised AI pitted against weaponised AI might be pointless, as both "machines" would be stuck in a logical stalemate.

Yes, humans can be predictable, but not completely, because of our imperfections. Perhaps this is what may save us against AI in the end.


From what I've read, the problem is that current hardware limits how far these AIs can grow.

The growth of AI is based upon our models of cognition, which are incomplete. So until an accurate model of cognition exists, the supporting hardware limits the implementation. One could argue that our own biological hardware limits our cognition. Since we are the architects of AI, this creates a feedback loop of limitations.

At some point they could develop non-linear thinking that allows for imagination and unpredictability. That's a long way off, but it's the goal of many. I hope it's never achieved.

It is a vainglorious attempt to replicate, or improve upon, something we don't understand: our own consciousness. Thus, by default, what they build will be flawed.