A new test could tell us whether an AI has common sense




Virtual assistants and chatbots don't have much common sense. That's because these machine learning systems lean on specific situations they have encountered before, rather than drawing on broader knowledge to answer a question. Researchers at the Allen Institute for Artificial Intelligence (AI2), however, have devised a new test, the AI2 Reasoning Challenge (ARC), that probes an artificial intelligence's understanding of how our world operates.

Humans use common sense to fill in the gaps of any question they are posed, delivering answers within an understood but non-explicit context. Peter Clark, the lead researcher on ARC, explained in a statement, "Machines do not have this common sense, and thus only see what is explicitly written, and miss the many implications and assumptions that underlie a piece of text."

The test asks basic multiple-choice questions that draw on general knowledge. For example, one ARC question is: "Which item below is not made from a material grown in nature?" The possible answers are a cotton shirt, a wooden chair, a plastic spoon and a grass basket.
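To make the format concrete, here is a minimal sketch in Python of how a question like this might be represented and scored. The field names (stem, choices, answer_key) and the random-guess baseline are illustrative assumptions for this post, not the official ARC dataset schema or AI2's evaluation code.

```python
import random

# An ARC-style multiple-choice question, using illustrative field names.
question = {
    "stem": "Which item below is not made from a material grown in nature?",
    "choices": {
        "A": "a cotton shirt",
        "B": "a wooden chair",
        "C": "a plastic spoon",
        "D": "a grass basket",
    },
    "answer_key": "C",
}

def random_guess(q):
    """A trivial baseline: pick one of the answer choices at random."""
    return random.choice(list(q["choices"]))

def accuracy(questions, solver):
    """Fraction of questions the solver answers correctly."""
    correct = sum(solver(q) == q["answer_key"] for q in questions)
    return correct / len(questions)

if __name__ == "__main__":
    # With four choices, a random guesser lands around 25% on average.
    # The point of a test like ARC is that doing meaningfully better
    # requires the background knowledge the article calls common sense.
    print(f"Random-guess accuracy: {accuracy([question] * 1000, random_guess):.2%}")
```

Answering "a plastic spoon" correctly is easy for a person, but it depends on unstated knowledge that cotton, wood and grass are grown while plastic is manufactured, which is exactly the implicit context machines tend to miss.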

If a machine learning system could pass the AI2 Reasoning Challenge, it would demonstrate a grasp of common sense that no AI currently possesses. That would be a huge step forward, leading to smarter artificial intelligence and, perhaps, moving these systems a little closer to one day taking over the world.


Source: MIT Technology Review