Friday, March 20, 2015

Google Artificial Intelligence

Google's artificial intelligence group, DeepMind, has figured out how to play and master a handful of Atari video games. Partnered with Oxford University, Google developed an AI unit capable of learning and improving itself over time. According to ZDNet, the system "can not only learn how to play computer games from scratch - but go on to ace them after a few hours of practice." This self-taught AI is not only capable of learning from past mistakes, it can also develop new tactics based on them. Describing the brick-breaking game Breakout, the article notes: "At first, the algorithm struggles to return the ball but, after a few hundred plays, it eventually learns the best strategy to beat the game: break a tunnel into the side of the brick wall and then aim the ball behind the wall." This is a step forward for science, as it suggests we can build robots driven by artificial intelligence that learn to tell good actions from bad ones and are able to perform more delicate tasks.
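DeepMind's actual system is a deep Q-network trained on raw screen pixels, which is far beyond a blog post. Still, the learn-from-mistakes loop the article describes can be sketched with plain tabular Q-learning. The toy game, names, and parameters below are all illustrative assumptions, not DeepMind's code:

```python
# Minimal tabular Q-learning sketch of the trial-and-error idea behind
# DeepMind's Atari player. ToyGame is a hypothetical stand-in environment:
# move right along states 0..5 to "win"; the agent starts out clueless and
# learns a good policy after a few hundred plays.
import random
from collections import defaultdict

class ToyGame:
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action 0 = left, 1 = right
        self.state = max(0, min(5, self.state + (1 if action == 1 else -1)))
        done = self.state == 5
        reward = 1.0 if done else 0.0
        return self.state, reward, done

q = defaultdict(float)          # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1
env = ToyGame()

def greedy(state):
    # Pick the best-known action, breaking ties at random.
    best = max(q[(state, a)] for a in (0, 1))
    return random.choice([a for a in (0, 1) if q[(state, a)] == best])

for episode in range(500):      # "a few hundred plays"
    state, done = env.reset(), False
    for _ in range(200):        # cap episode length
        # Explore occasionally, otherwise exploit what has been learned.
        action = random.choice([0, 1]) if random.random() < epsilon else greedy(state)
        next_state, reward, done = env.step(action)
        # Learn from the outcome: nudge the estimate toward reward + future value.
        best_next = max(q[(next_state, a)] for a in (0, 1))
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if done:
            break

# After training, the learned policy should pick "right" (1) in every state.
print({s: greedy(s) for s in range(5)})
```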


http://www.zdnet.com/article/googles-deepmind-artificial-intelligence-aces-atari-gaming-challenge/

Written by: Jeff Almozar
