Computer, explain to me what you are doing

Every computer science student who deals with Artificial Intelligence (AI) knows the tank anecdote. Although its exact origin is unclear, it has been retold again and again for decades. During the Cold War, the US military conducted an experiment with an early form of a neural network. It reveals basic problems in dealing with AI that have not been resolved to this day.

The military researchers wanted to teach the AI software to detect tanks. To this end, they photographed a forest from the air, first in its natural state, then with camouflaged tanks hidden under the trees. They fed the program with the photos, and after a short while it unerringly identified the shots containing military vehicles. At the time this was celebrated as an astonishing success, but the enthusiasm was short-lived. When the Pentagon tested the software on other tank photographs a little later, the AI scored only random hits. The cause of the failure: the empty forest had been photographed in sunshine, the tank photos under a cloudy sky.
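
To see how easily this can happen, here is a minimal sketch with entirely synthetic data (not the original experiment): a simple classifier is trained on toy "images" in which brightness happens to correlate with the presence of a tank, and it fails as soon as that correlation breaks.

```python
# A minimal, purely illustrative sketch: a classifier that learns image
# brightness instead of the intended feature, mirroring the tank anecdote.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, tank, sunny):
    """Tiny fake 'images': overall brightness depends on the weather,
    a few pixels carry a weak signal for whether a tank is present."""
    base = 0.8 if sunny else 0.3                 # sunshine vs. clouds
    imgs = rng.normal(base, 0.05, size=(n, 64))  # 8x8 pixels, flattened
    if tank:
        imgs[:, :4] += 0.1                       # faint 'tank' signal
    return imgs

# Training set: empty forest photographed in sunshine, tanks under clouds
X_train = np.vstack([make_images(200, tank=False, sunny=True),
                     make_images(200, tank=True,  sunny=False)])
y_train = np.array([0] * 200 + [1] * 200)

# Test set: the weather correlation is broken
X_test = np.vstack([make_images(200, tank=False, sunny=False),
                    make_images(200, tank=True,  sunny=True)])
y_test = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # near perfect
print("test accuracy: ", clf.score(X_test, y_test))    # around chance or worse
```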

What can be learned from this today? The decisions of AI systems still depend directly on the data they are fed with. Computing power, model complexity, and the size of data sets have grown many times over in the past decades, so such photo-sorting tasks today work (mostly) without errors. But what happens with more complex tasks that people cannot judge as easily as the brightness of a photograph? Who understands the decisions of a program that calculates a criminal's probability of reoffending from a multitude of factors?

What does "transparency" mean for software decisions?

Scientists and lawyers are currently conducting this debate with great passion: can AI systems be committed to "transparency", i.e. to making their judgments comprehensible to people? In many areas of life, decisions are already automated or will be made by computers in the foreseeable future: algorithms calculate insurance premiums, sort job applications, or make urban-planning decisions. These are therefore not just theoretical questions among scientists.
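
One simple technique from this transparency toolbox is permutation importance: shuffle one input feature at a time and observe how much the model's accuracy drops. The sketch below uses invented, insurance-style features purely for illustration.

```python
# A minimal sketch of permutation importance as one way to make a model's
# decision basis visible. Features and data are synthetic and invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["age", "annual_mileage", "previous_claims", "car_color"]

# Synthetic data: the outcome depends on mileage and claims, not car color
X = rng.normal(size=(1000, 4))
y = (X[:, 1] + 2 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name:16s} importance: {importance:.3f}")
# 'previous_claims' and 'annual_mileage' should dominate; 'car_color' near 0
```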

"Explanation is the biggest challenge we face in AI," said Andrew Burt, a researcher for data analysis firm Immuta and Yale University. The problem is already evident in conventional algorithms used in the US justice system. They learn from previous cases and the prejudices of the judges. As a result, they predict a higher probability of relapse for African Americans than they did in reality (in the case of whites it was the other way round).

The decision logic of a neural network, the technology that has accelerated AI development, is far more complex still. It arises from the interplay of thousands of artificial neurons, which in turn are arranged in dozens or even hundreds of interconnected layers; not for nothing is the system modeled on the wiring of the human brain.
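
The following sketch (an assumed, deliberately tiny architecture, not one from the article) shows where this opacity comes from: even a toy network mixes every input through roughly 9,000 weights spread across several layers, so no single weight corresponds to a rule a human could read off.

```python
# A minimal sketch of a small feedforward network, built from scratch to
# show how a decision emerges from the interplay of many weights.
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

layer_sizes = [10, 64, 64, 64, 2]   # input, three hidden layers, two classes
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input through every layer; the output depends on all
    ~9,000 weights at once, not on any single, human-readable rule."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return x @ weights[-1] + biases[-1]   # raw scores for the two classes

sample = rng.normal(size=10)
print("class scores:", forward(sample))
```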