Follow-up to Corporate Trolls


After I shared my last post about astroturfing and how corporations try to influence us every day all over the internet, I came across this article: The Terrifying Future of Fake News.


(Image: V for Vendetta mask)

Here are just a few highlights and thoughts from the article:

Fake Audio and Video via AI

I was already feeling like all written text online could be suspect, but with the tools being developed using AI/machine learning and augmented reality, it looks like audio and video can already be leveraged to wage an information war.

Already available tools for audio and video manipulation have begun to look like a potential fake news Manhattan Project. In the murky corners of the internet, people have begun using machine learning algorithms and open-source software to easily create pornographic videos that realistically superimpose the faces of celebrities — or anyone for that matter — on the adult actors’ bodies. At institutions like Stanford, technologists have built programs that combine and mix recorded video footage with real-time face tracking to manipulate video. Similarly, at the University of Washington computer scientists successfully built a program capable of “turning audio clips into a realistic, lip-synced video of the person speaking those words.” As proof of concept, both teams manipulated broadcast video to make world leaders appear to say things they never actually said.

The clips of Obama are not perfect: his brow is too static, and something seems a little off if you watch closely. But this is just the beginning. These tools will reach the point where it is nearly impossible to tell a faked video from a real one.

Manipulation of the Masses

Another scenario, which Ovadya dubs “polity simulation,” is a dystopian combination of political botnets and astroturfing, where political movements are manipulated by fake grassroots campaigns. In Ovadya’s envisioning, increasingly believable AI-powered bots will be able to effectively compete with real humans for legislator and regulator attention because it will be too difficult to tell the difference. Building upon previous iterations, where public discourse is manipulated, it may soon be possible to directly jam congressional switchboards with heartfelt, believable algorithmically-generated pleas.

This technology could also be used for "laser phishing": hyper-targeted spam so convincing that you can't tell it from a real email, because it looks like it came from a friend.

Beset by a torrent of constant misinformation, people simply start to give up. Ovadya is quick to remind us that this is common in areas where information is poor and thus assumed to be incorrect. The big difference, Ovadya notes, is the adoption of apathy to a developed society like ours. The outcome, he fears, is not good. “People stop paying attention to news and that fundamental level of informedness required for functional democracy becomes unstable.”

Blockchain to the Rescue?

There are currently no clear solutions to this, but I imagine blockchain technology could help. A blockchain gives you an append-only, tamper-evident record, which is useful for verifying where a piece of content came from and whether it has been altered since it was published. Maybe in the next few years we will all be getting our news from the blockchain, since it may be the only place we can verify the source.
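
To make that idea a little more concrete, here is a minimal Python sketch of the underlying mechanism: an append-only log where each published item commits to the hash of the previous one, so any later edit to past content is detectable. This is just an illustration under my own assumptions, not any existing news platform's implementation; the NewsLog class and its publish/verify methods are hypothetical names I made up for the example.

import hashlib
import json
import time


def _hash_record(record: dict) -> str:
    """Hash a record's canonical JSON form with SHA-256."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


class NewsLog:
    """Toy append-only log: each entry commits to the previous entry's hash."""

    def __init__(self):
        self.entries = []

    def publish(self, source: str, headline: str, body: str) -> dict:
        # Record who published what and when, linked to the prior entry.
        entry = {
            "source": source,
            "headline": headline,
            "body_hash": hashlib.sha256(body.encode("utf-8")).hexdigest(),
            "timestamp": time.time(),
            "prev_hash": self.entries[-1]["hash"] if self.entries else None,
        }
        entry["hash"] = _hash_record(entry)
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any edit to a past entry breaks the chain."""
        prev_hash = None
        for entry in self.entries:
            expected = _hash_record({k: v for k, v in entry.items() if k != "hash"})
            if entry["hash"] != expected or entry["prev_hash"] != prev_hash:
                return False
            prev_hash = entry["hash"]
        return True


if __name__ == "__main__":
    log = NewsLog()
    log.publish("Example Wire", "Headline A", "Full article text...")
    log.publish("Example Wire", "Headline B", "More article text...")
    print("chain valid:", log.verify())       # True
    log.entries[0]["headline"] = "Doctored!"  # tamper with published history
    print("chain valid:", log.verify())       # False

A real system would still need to distribute the log across many parties and tie entries to publishers' cryptographic signatures, but the basic point stands: tampering with already-published content becomes something anyone can check, rather than something we have to take on trust.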