Google’s DeepMind AI starts to think like a human for the first time
In 2016, for the first time, an artificial intelligence called AlphaGo beat a reigning world-class human champion in a game of Go. The victory was both unprecedented and unexpected, given the immense complexity of the ancient Chinese board game, and since then AlphaGo has gone on to beat a number of other Go champions. However, it remains a narrow artificial intelligence: it can outperform humans only in a very limited domain of tasks.
So even though it might be able to kick your ass at one of the most complicated board games in existence, you wouldn't exactly want to depend on AlphaGo for even the most mundane daily tasks, like making you a cup of tea or scheduling a tuneup for your car … yet.
This will change soon.
According to two new papers released in June 2017, researchers at DeepMind, the secretive Alphabet subsidiary, are laying the groundwork for a more general artificial intelligence with two new approaches that teach neural networks to reason about the relationships between objects and how those objects interact.
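To give a rough flavour of that relational-reasoning idea, here is a minimal sketch in plain NumPy of the underlying pattern: a small shared network g scores every pair of "objects", the scores are summed, and a second network f turns the sum into an answer. The object features, the layer sizes, and the tiny one-layer networks g and f below are illustrative placeholders I chose for the example, not the actual architecture from the DeepMind papers.

```python
import numpy as np

rng = np.random.default_rng(0)

n_objects, obj_dim, hidden_dim, out_dim = 5, 8, 16, 4

# Random "object" representations, e.g. features extracted from an image.
objects = rng.standard_normal((n_objects, obj_dim))

# Shared weights for g (pairwise relation) and f (aggregation), one layer each.
W_g = rng.standard_normal((2 * obj_dim, hidden_dim)) * 0.1
W_f = rng.standard_normal((hidden_dim, out_dim)) * 0.1

def relu(x):
    return np.maximum(x, 0.0)

# g is applied to every ordered pair of objects; summing the outputs makes the
# result independent of the order in which the pairs are considered.
pair_sum = np.zeros(hidden_dim)
for i in range(n_objects):
    for j in range(n_objects):
        pair = np.concatenate([objects[i], objects[j]])
        pair_sum += relu(pair @ W_g)

# f maps the aggregated relational features to the final output,
# e.g. the answer to a question about the scene.
output = relu(pair_sum @ W_f)
print(output.shape)  # (4,)
```

In a real system the object features would come from a learned encoder and g and f would be trained end to end, but the pairwise-compare-then-aggregate structure is the core trick.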
The results were remarkably promising: on some relational-reasoning benchmarks, the networks even surpassed human performance. This work is a significant step toward a general AI, although plenty remains to be done before artificial intelligences are able to take over the world.
But this – for sure – is just a matter of time.
For further information:
https://arxiv.org/pdf/1706.01433.pdf