The article is sparse on details, but the linked MIT news article goes into more depth. Of note, the algorithm won 79% of the games it played. Without textual input it won only 46%, and even a more advanced machine learning algorithm without the text won only 62%. Pretty cool.
Two things come to my mind:

1. That's gotta be one helluva manual.

2. Reversing the procedure to automate documentation by examining variable and method names along various code paths would be brilliant.
Good thing we're teaching them Civilization - they'll never get out of the server room.

Teaching ultra-intelligent AI Monopoly, though? - guaranteed robot overlords.
Does that mean that if the program read additional texts on Civilization strategy, it would get even better? How about texts that may be somewhat related but not specific to the game (combat strategy, world history?!...)?
I'm somewhat familiar with this work. (My advisor talked to the authors some. I could be misrepresenting it a little, but not nearly as much as the article.)

It's not learning to play the whole game. It's learning to cheese (in gaming parlance) the opponent. The strategy it learns is to build a warrior as fast as possible and go attack the enemy's city. If that fails, it almost always loses. The manual gave it some hint in that direction.