50PH14: Slaughterbots & Fear-mongering vs. Useful Action

in #news · 7 years ago (edited)


So . . . . the Elon Musk anti-AI hype cycle has started up again.

Elon Musk, Warren G, and Nate Dogg: It's Time to 'Regulate' A.I. Like Drugs

Worse, we have Stuart Russell's movie, Slaughterbots.


screen-grab from Slaughterbots

Actually, Slaughterbots is exceptionally well-done and really should be mandatory watching for everyone.  If you haven't seen it, go watch it now -- it's less than eight minutes long, "entertaining" and scary AF.

The first problem is that what the film depicts could be built today by most computer science graduates on a fairly low budget.  The only real obstacle is obtaining the shaped explosive -- and the internet provides all sorts of opportunities and work-arounds for that.


screen-grab from Slaughterbots

The second problem is "slaughterbots" are too effective and "clean" of a solution.  Land mines were outlawed for a variety of reasons beyond civilian casualties.  Cruise missiles and fully autonomous systems like the Israeli Harpy (used by nine countries and operational since 2008) have generated a lot of protest but are too useful for countries to give up -- and they are getting better and better.

But the biggest problem, and what I want to rant about, is the completely muddled framing of all of this as a single "AI problem".  In reality, there are four very different and non-overlapping AI problems.  There is weaponry that uses software developed as part of AI research but which is not itself truly "autonomous".  There is the fear of truly autonomous killer robots (aka Arnold Schwarzenegger).  There are the already-existing problems of humans either intentionally using AI to harm others and sway elections, or unintentionally causing harm through bias and other "black box" shortcomings.  And there is the rapidly growing problem of AI replacing human workers.

I have argued for years with Noel Sharkey and the International Committee for Robot Arms Control about their rhetorical tactics of conflating the current entirely pre-programmed (and still fairly stupid) weapons with future self-improving fully autonomous robots.  Stuart Russell is only backing into the autonomous weaponry debate because he is deathly afraid of future super-intelligent AI.  And indeed, most of what the average citizen gets through the news from Elon Musk, Nick Bostrom, the Machine Intelligence Research Institute (MIRI), the Future of Life Institute (FLI) and others is actually weaponized narrative to ensure that their fears are honored.

What we haven't seen is any effective collaborative action.  We've seen several "ethics" boards formed -- but membership has been strictly limited and there have been almost no published results.  MIRI and FLI have sunk a lot of money into one very specific line of research -- that precisely follows the errors that prevented AI from making progress for decades.  Instead of partisan fear-mongering and calls for regulation with absolutely no details, we need to divide the problem into rational pieces and start proposing rational solutions COLLABORATIVELY.


image provided courtesy of Metric Media

The first step is to separate the problems where humans are responsible (i.e. ALL the current problems) from the problems where machines are responsible (FUTURE).  We need to stop nonsensical fear-mongering proclamations like Elon Musk's claim that humanity has only a 5-10% chance of surviving artificial intelligence.  And we need to start investigating ALL avenues together.

Machines with at-least-human intelligence are coming whether we like it or not.  There are already many clear and present dangers from the limited AI that we already have.  Let's stop the screaming and get down to business.  The future of humanity is on the line.

=================

Let The Game BEGIN!

abbc://debfgh.deigj
abbc://kglimd.igngbjo
abbc://pefmlbjbe.qeb
abbc://jfbgrghgjoneqefjogqbeoogneqhegqh.hmd



Thanks, that was fun :).



Congratulations to @davidjkelley (with an assist by Marquiz Woods) for being the first one to break the code!

Whenever anyone else gets it, be sure to keep it because you'll need it later . . . . but please don't give it to anyone else.

The question of robots in the future will be of great importance.  Everything will depend on whether they are used for war or for peaceful purposes -- and that depends on the leaders of the developed countries.
Let's hope that people realize the danger of using robots for military purposes, which could lead to great disasters if the robots slip out of human control.
