Artificial Intelligence may overcome humans sooner than you, I, and the experts imagined.

Do You Know Go?

Go, also known as Weiqi or Baduk, is quite simply, in a certain light, the most difficult game in the world. You, my chess-playing friend, may already be gathering stones to throw at me, but calm down.

It is true that at a basic level chess is the more demanding game strategically: it is not enough to move the pieces, you need a plan of action and reaction. In Go, by contrast, a beginner can place stones more or less at random and still achieve a certain amount of success.

But when the best players in the world sit down at the board, Go becomes an extremely complex game, precisely because it allows vastly more move sequences than chess. In numbers, chess has on the order of 10^120 possible games, while Go has around 10^768. That is why it is often said that no game of Go has ever been repeated, even though the game has existed for more than 2,500 years.

Basically, Go is played on a board (the goban) of 19 by 19 intersecting lines, with an effectively unlimited supply of black and white stones that are identical to one another. The players place stones on the intersections alternately, each trying to enclose as much territory as possible while shutting down the opponent's options. Go is, therefore, a game of dominating territory by surrounding your opponent.
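
Just to make the setup concrete, here is a tiny Python sketch of my own (not anything from Google) of how such a board can be represented: a 19-by-19 grid where each intersection is empty, black, or white. It ignores captures, the ko rule, and scoring.

```python
# A minimal, illustrative Go board: 19 x 19 intersections, each one empty ('.'),
# black ('B') or white ('W'). This toy only places stones; it does not implement
# captures, the ko rule, or scoring.
SIZE = 19

def new_board():
    return [['.'] * SIZE for _ in range(SIZE)]

def place_stone(board, row, col, color):
    """Put a stone on an empty intersection (colors alternate in a real game)."""
    if board[row][col] != '.':
        raise ValueError("intersection already occupied")
    board[row][col] = color

def show(board):
    for row in board:
        print(' '.join(row))

board = new_board()
place_stone(board, 3, 3, 'B')     # a typical black opening near a corner
place_stone(board, 15, 15, 'W')   # white answers in the opposite corner
show(board)
```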

Do You Know Lee Sedol?

Lee Sedol is a 33-year-old South Korean who has been a professional Go player since he was 12 and has won 18 world titles since then. He is considered the best Go player in the world, and he was handed an unenviable mission: to face a Google supercomputer.

Accustomed to dominating his opponents, Sedol was invited to a duel against AlphaGo, software developed by Google's DeepMind artificial intelligence program. Just to spice things up, the winner would take home a $1 million prize.

The clash was played as a best-of-five series, as is customary in the game, and it soon gained huge visibility, with a real-time broadcast and wide coverage in South Korea. At the same time, it became impossible not to draw comparisons with the historic chess match between IBM's Deep Blue software and Garry Kasparov, the then world chess champion, on May 11, 1997, when a computer beat Man at the game for the first time (Kasparov had beaten the machine a year earlier).

How Did the Confrontation Play Out?

Despite the wide repercussions, the confrontation did not come out of nowhere, and Lee Sedol was neither the first nor the only Go player to be beaten by the software. Shortly before facing the world champion, the Google machine had convincingly defeated the three-time European Go champion, Fan Hui, by five to zero.

With that track record behind them, the developers were confident enough to arrange the confrontation against the best player in the world, and they did not regret it. In a behind-the-scenes race in the tech world, Google got there ahead of other companies, such as Facebook, that were chasing the same feat (maybe, just maybe, that matters to them).

According to players and experts who followed the series, the first two games were the hardest for the human. Facing a style of play unlike anything he was used to, Sedol tried to probe for weaknesses in the computer instead of imposing his own game. AlphaGo soon went 2-0 up in the series.

By the third game the South Korean had decided to adopt a different strategy, and it gave the computer much more work. Sedol used some innovative tactics and set up a kind of all-or-nothing fight, something he had not yet tried in the series. But the computer was prepared for that too and won by resignation after 176 moves. By then the human had already used up his two-hour time allowance and gone into overtime, while the computer still had eight and a half minutes on the clock, which also gave it an edge in the speed of its decisions.

Despite the 3-0 result, it is customary for such series to be played out to the last game, so the fourth and fifth duels went ahead.

And it was precisely in the next game that the surprise came. After beating the European champion 5-0 and going 3-0 up against the world champion, the machine finally lost a game to a human.

In a duel that lasted more than five hours, Sedol punished the computer to the very end. After a flawed move at move 79, the software only recognized its mistake at move 87, and by then the South Korean had already built up enough of an advantage to defend it to the finish.

The unprecedented victory energized everyone following the duel; at that moment it seemed a precedent had been set. Commentators argued that Sedol had finally understood how the computer behaved and had managed to turn that knowledge against it. Expectations for the fifth game soared: another win could mean that the human really had found a flaw in the software's programming.

And indeed the fifth and final duel was the most balanced. Both competitors went into overtime, and the game only ended after 280 moves. Once again, those following the match could not point to a clear advantage for either side; victory seemed to change hands with every move. But then something difficult to understand happened.

At move 48, the computer played something very similar to its error in the fourth game, but this time it was an unprecedented move. Unable to follow the logic, the experts pointed to it as yet another computer mistake, which seemed to confirm the thesis that Sedol had found a gap in the programming. What followed, however, was a sequence of moves right in the middle of the board that made the game long and complicated, and after almost five hours it ended with Sedol's resignation, fixing the final score at 4-1.

With AlphaGo's victory, Google decided to donate the $1 million to UNICEF, to institutions that promote STEM (science, technology, engineering and math) education, and to Go organizations. Sedol, for his part, received $150,000 for taking part, plus $20,000 for each win, of which he had only one.

What Does All This Tell Us About Artificial Intelligence?

Researchers have been trying to teach computers to play Go for over twenty years. But an investment of this size, bringing together some of the world's most talented and intelligent people, is not made just to win at Go. Google has much more ambitious plans.

Before going into how the software was built and what its goals are, it is worth remembering that although Go is an extremely hard game to teach to a computer (for reasons you can probably already imagine, and that we will look at in more depth below), there are still games in which computers cannot outperform humans, such as poker.

That is because, despite their enormous number of possibilities, games like chess and Go are what we call zero-sum games with perfect information: one player's gain necessarily means the other's loss, and all the information that needs to be processed, however much of it there is, sits right there on the board. Poker, by contrast, hides part of the state in the opponents' hands.
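
To illustrate what "zero-sum with perfect information" means in practice, here is a toy negamax sketch of my own, using a much simpler game than Go: a single pile of stones where players alternately take one or two, and whoever takes the last stone wins. Because the whole state is visible and one side's gain is exactly the other's loss, a single recursive search can evaluate every position.

```python
# Toy zero-sum, perfect-information game: one pile of stones, players alternately
# take 1 or 2, and whoever takes the last stone wins. The entire state is the
# pile size, and one player's win is exactly the other's loss, so plain negamax
# can evaluate every position.

def negamax(pile):
    """Return +1 if the player to move can force a win, -1 otherwise."""
    if pile == 0:
        return -1  # the previous player took the last stone, so the mover has lost
    best = -1
    for take in (1, 2):
        if take <= pile:
            best = max(best, -negamax(pile - take))  # zero-sum: the opponent's value is negated
    return best

for pile in range(1, 10):
    print(pile, "winning" if negamax(pile) == 1 else "losing")
```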

But none of this diminishes the software's achievement, especially when we consider that experts had estimated Artificial Intelligence would only get there around 2025 (!).

And the predictions missed by so much largely because they assumed success would come from a method similar to the one IBM used in 1997. When Deep Blue defeated Kasparov before the turn of the millennium, the model was "brute force": the computer could evaluate 200 million positions per second and was fed a database of hundreds of thousands of games, from the classics to the great creations of the world's leading chess masters at the time.
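
To get a feel for why that approach does not carry over to Go, here is a rough back-of-the-envelope sketch. The branching factors are the usual ballpark figures (about 35 legal moves per position in chess, roughly 250 in Go), and the three-minute thinking budget is just an illustrative assumption of mine.

```python
# Back-of-the-envelope: how many plies of exhaustive look-ahead fit into a fixed
# thinking budget at Deep Blue's reported speed of 200 million positions per second?
# Branching factors are rough ballpark figures: about 35 for chess, about 250 for Go.
SPEED = 200_000_000        # positions examined per second
BUDGET_SECONDS = 180       # an illustrative three minutes of thinking per move

def reachable_depth(branching_factor, budget):
    """Deepest full look-ahead whose number of leaf positions still fits the budget."""
    depth, leaves = 0, 1
    while leaves * branching_factor <= budget:
        leaves *= branching_factor
        depth += 1
    return depth

budget = SPEED * BUDGET_SECONDS
print("chess (about 35 moves per position): roughly", reachable_depth(35, budget), "plies")
print("go (about 250 moves per position):   roughly", reachable_depth(250, budget), "plies")
```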

Given how many more possibilities Go presents, nobody imagined that success would come so soon. But it did, because instead of trying to teach the computer all (or at least almost all) the possibilities that exist in the game, the developers made it possible for the computer to learn on its own. They abandoned the "brute force" model and turned to the "neural network" model.

Google DeepMind's scientists developed two networks. The first, called the "value network", evaluates positions on the board. The second, the "policy network", chooses the moves.
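
Below is a minimal sketch of what such a pair of networks might look like in PyTorch. The layer sizes and the single-channel board encoding are my own simplifications, not DeepMind's published architecture: the policy network maps a position to a probability for each of the 361 intersections, and the value network maps the same position to a single score between -1 and 1.

```python
import torch
import torch.nn as nn

BOARD = 19  # a 19 x 19 Go board

class PolicyNet(nn.Module):
    """Maps a board position to a probability for each of the 361 intersections."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * BOARD * BOARD, BOARD * BOARD)

    def forward(self, position):                      # position: (batch, 1, 19, 19)
        x = self.conv(position).flatten(start_dim=1)
        return torch.softmax(self.head(x), dim=1)     # probability of playing each point

class ValueNet(nn.Module):
    """Maps a board position to a single score in [-1, 1], the predicted final outcome."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * BOARD * BOARD, 1)

    def forward(self, position):
        x = self.conv(position).flatten(start_dim=1)
        return torch.tanh(self.head(x))               # estimated long-term result

# Example: run an (untrained) pair of networks on an empty board.
empty = torch.zeros(1, 1, BOARD, BOARD)
print(PolicyNet()(empty).shape)   # torch.Size([1, 361])
print(ValueNet()(empty))          # some value between -1 and 1
```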

Both networks were first fed, through deep learning, an unprecedented database containing millions of moves from expert human games. After that first step, the neural networks were put to work and the computer played thousands of games against itself, so that it could apply what it had absorbed and (here is the great secret) learn on its own.
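
The self-play idea can be shown with a deliberately tiny, tabular stand-in of my own (again the take-one-or-two pile game, and nothing like AlphaGo's real training pipeline): one set of statistics plays both sides, and every move is credited with the final result of the game it appeared in.

```python
import random
from collections import defaultdict

# Tiny tabular self-play on the take-1-or-2 pile game (last stone wins).
# One table of statistics plays both sides; after each game the final result is
# credited to every move that was played, from that mover's point of view.
wins = defaultdict(float)
plays = defaultdict(float)

def choose(pile, explore=0.2):
    options = [t for t in (1, 2) if t <= pile]
    if random.random() < explore:
        return random.choice(options)                 # occasionally try something new
    return max(options,                               # otherwise use the best win rate so far
               key=lambda t: wins[(pile, t)] / (plays[(pile, t)] + 1e-9))

for _ in range(20000):
    pile, history, player = 9, [], 0
    while pile > 0:
        take = choose(pile)
        history.append((player, (pile, take)))
        pile -= take
        player = 1 - player
    winner = 1 - player                               # the player who took the last stone
    for mover, move in history:                       # feed the final outcome back to each move
        plays[move] += 1
        wins[move] += 1.0 if mover == winner else 0.0

# After many self-play games, greedy choices tend to leave the opponent a
# multiple of three stones, which is the winning strategy in this toy game.
print({pile: choose(pile, explore=0.0) for pile in range(1, 10)})
```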

The method also steered the computer toward strategies that aim not for an immediate gain but for the best long-term result. That is, on receiving a given input, the computer does not look for a way to neutralize it right away, but for a sequence of moves that brings it closer to victory. The efficiency of any single move may suffer, but the end result does not.
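
A toy way to see the difference, with entirely made-up numbers: a greedy player ranks moves by what they capture right now, while a value-based player ranks them by the estimated final outcome of the position they lead to.

```python
# Two hypothetical candidate moves with made-up numbers: one captures more stones
# right now, the other leads to a position with a better estimated final outcome.
candidates = {
    "A": {"immediate_capture": 3, "estimated_final_outcome": -0.2},
    "B": {"immediate_capture": 0, "estimated_final_outcome": 0.6},
}

greedy_choice = max(candidates, key=lambda m: candidates[m]["immediate_capture"])
value_choice = max(candidates, key=lambda m: candidates[m]["estimated_final_outcome"])
print("greedy player picks:", greedy_choice)       # A: grabs the stones, worse position
print("value-based player picks:", value_choice)   # B: nothing now, better final outcome
```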

That way, since it does not have to calculate every possible continuation at every turn, AlphaGo does far less processing, far faster, than Deep Blue did. The newer software analyzes only the combinations that look most favorable in the medium term, not all of them, as was done before.
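
The pruning idea, in isolation and heavily simplified (the real system couples it with a tree search guided by the networks), looks roughly like this: a policy prior scores all legal moves, and only the top handful are examined further.

```python
import random

# Selective search in isolation: instead of expanding all 361 legal moves,
# keep only the handful that a policy prior rates most highly.
random.seed(0)
legal_moves = [(row, col) for row in range(19) for col in range(19)]

# Stand-in for a policy network: give every move a random "prior probability".
priors = {move: random.random() for move in legal_moves}
total = sum(priors.values())
priors = {move: p / total for move, p in priors.items()}

TOP_K = 5  # only the most promising candidates are searched further
candidates = sorted(legal_moves, key=lambda m: priors[m], reverse=True)[:TOP_K]

print("expanding", len(candidates), "of", len(legal_moves), "legal moves:")
for move in candidates:
    print(move, round(priors[move], 4))
```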

As a result, the computer Google developed thinks less like a machine and more like a human, weighing realistic possibilities rather than every hypothetical one. And it is this, together with the computer's ability to draw conclusions from new data, that signals a bright future for Artificial Intelligence.

Google itself acknowledges that winning a Go match, even against the world champion, is not that significant in itself, especially when compared with the chess duel of 20 years ago. What excites the developers is precisely the possibility of applying this model in other areas.

In a statement on its official blog, the company said its engineers are encouraged to apply these methods to "complex and urgent problems in society, such as the study of climate and the analysis of diseases." Now all we can do is wait and see what the future brings.

Oh fun.. I can safely say I was not very excited for the day that AI really starts coming into play. Did the world not pay attention when watching Terminator?? :O

Absolutely on point

I think it's a bit scary that AI is evolving that fast ;-)
