I'm a TA for an AI course at my university. The students recently had to deliver and demonstrate a system that beats 2048. Most used minimax with alpha-beta pruning, considering all possible moves and all possible placements of a 2 or 4 tile. This can make the bot a bit too cautious, so some used expectimax instead, weighting each outcome by the probability of it happening.

Those who had simpler heuristics did better; combining 4-5 heuristics is hard, since you have to weight them against each other. The "gradients" mentioned here produced good results on their own for most students. Of the ~50 people, most managed to demonstrate to me that they could get a 2048 tile within a time limit. Some even reached 8k and 16k tiles.

I think most of them got the "Tetris effect" from watching their bot play a few rounds, tweaking, running it again, and so on for a few days. Probably watched blocks sliding around while making food etc. :p
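To make the minimax-vs-expectimax distinction concrete: in 2048 the "opponent" is the random tile spawn, so instead of a min node assuming the worst placement, a chance node averages over every empty cell receiving a 2 (probability 0.9) or a 4 (probability 0.1). The sketch below is my own minimal illustration, not any student's submission; the `heuristic` function (empty cells plus biggest tile) is a deliberately toy stand-in for the combined heuristics discussed above.

```python
PROB_2, PROB_4 = 0.9, 0.1  # standard 2048 spawn probabilities

def slide_row_left(row):
    """Slide and merge one row to the left, 2048-style."""
    tiles = [v for v in row if v]           # drop zeros
    out, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] * 2)        # merge one equal pair
            i += 2
        else:
            out.append(tiles[i])
            i += 1
    return out + [0] * (len(row) - len(out))

def move(board, direction):
    """Return the board after moving 'left', 'right', 'up' or 'down'."""
    if direction in ('up', 'down'):
        board = [list(col) for col in zip(*board)]   # transpose
    if direction in ('right', 'down'):
        board = [row[::-1] for row in board]         # mirror
    board = [slide_row_left(row) for row in board]
    if direction in ('right', 'down'):
        board = [row[::-1] for row in board]
    if direction in ('up', 'down'):
        board = [list(col) for col in zip(*board)]
    return board

def heuristic(board):
    """Toy evaluation: empty cells plus a small bonus for the biggest tile."""
    empty = sum(v == 0 for row in board for v in row)
    return empty + max(v for row in board for v in row) / 1024.0

def expectimax(board, depth, player_turn):
    if depth == 0:
        return heuristic(board)
    if player_turn:                          # max node: pick the best move
        best = None
        for d in ('left', 'right', 'up', 'down'):
            nxt = move(board, d)
            if nxt != board:                 # only consider legal moves
                val = expectimax(nxt, depth - 1, False)
                best = val if best is None else max(best, val)
        return heuristic(board) if best is None else best
    # chance node: average over every placement of a 2 (p=.9) or 4 (p=.1)
    cells = [(r, c) for r in range(4) for c in range(4) if board[r][c] == 0]
    if not cells:
        return heuristic(board)
    total = 0.0
    for r, c in cells:
        for tile, p in ((2, PROB_2), (4, PROB_4)):
            board[r][c] = tile
            total += p * expectimax(board, depth - 1, True)
            board[r][c] = 0                  # undo the trial placement
    return total / len(cells)

def best_move(board, depth=3):
    """Pick the legal move with the highest expectimax value."""
    scored = []
    for d in ('left', 'right', 'up', 'down'):
        nxt = move(board, d)
        if nxt != board:
            scored.append((expectimax(nxt, depth - 1, False), d))
    return max(scored)[1] if scored else None
```

Swapping the chance node for a min node over the same placements gives the pessimistic minimax variant, which is exactly what makes those bots play "too cautious": they plan against a spawn adversary that doesn't exist.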
In my case it actually lost, but watching it play I could tell it made some very questionable decisions, and it plays a lot less defensively than I usually do. It managed to get really far, only a couple of moves away from winning. It had actually assembled every piece needed to win; it just failed to group them together to reach 2048. Pretty incredible!

Also a very interesting read.
The brilliance of this post is not that an AI program can beat another AI program, but in whether a humanly conceivable algorithm of this length can beat the raw cognitive power of human players themselves. I would be seriously digging this.
I let the algorithm run to the end: 78992 points. Not only did I win, I got a 4096 tile (which is black, btw) and another 2048 tile. It died very close to getting an 8192 tile.
Really like that game too. Did you try it with alpha-beta pruning? It should considerably speed up the "look into the future" part compared to plain minimax.
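For reference, the pruning idea is generic: once the maximizing player has a guaranteed value of at least alpha and the minimizing player can force at most beta, any branch where alpha >= beta can be skipped. A minimal sketch over an abstract game tree (nested lists with numeric leaves standing in for board evaluations; the tree here is a made-up example, not from the article):

```python
def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    """Minimax with alpha-beta pruning on a tree given as nested lists;
    leaves are plain numbers (a stand-in for a heuristic board score)."""
    if not isinstance(node, list):      # leaf: return its evaluation
        return node
    if maximizing:
        best = float('-inf')
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:           # min player won't allow this branch
                break                   # -> prune the remaining siblings
        return best
    best = float('inf')
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:               # max player already has better
            break
    return best

# example: the minimax value is 3, and the subtrees [8, 9] and [-1]
# are never evaluated thanks to the cutoffs
tree = [[3, 5], [2, [8, 9]], [0, -1]]
```

One caveat worth knowing: alpha-beta only applies to strict min/max trees. If the tile spawns are modeled as chance nodes (expectimax), plain alpha-beta cutoffs don't carry over, since an average can't be bounded by a single bad child the way a min can.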