If you're wondering why this is interesting: the games AIs excel at, like chess, checkers, and go, are all two-player, zero-sum, perfect-information (everyone knows everything), deterministic games, so you can exactly predict your opponent's behavior by simulating "what would I do in their position, trying to make me lose?" The only really hard problem in this space is the extreme branching factor.

Everything gets vastly more complicated once you break any of those rules. Non-zero-sum games create a prisoner's-dilemma cooperate/defect dynamic, and every game with three or more players is non-zero-sum (and the complexity grows exponentially with each player you add). Hidden information forces you to manage how much you reveal to your opponent, and requires you to simulate multiple "alternate futures" based on things you learn only after making a decision. Randomness is equivalent to an extra player who makes irrational, unpredictable moves.

Games like that are vastly closer to the messy real world than the computationally expensive but near-ideal world of games like go, and they're much more of an open problem.
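To make the "simulate what I would do if I were them" point concrete, here's a minimal negamax sketch (my own toy illustration, not anything from the article). In a two-player zero-sum perfect-information game, the opponent's best reply is literally your own search with the sign flipped:

```python
def negamax(node):
    """Value of `node` for the player about to move.

    `node` is either a number (the terminal payoff to the mover) or a
    list of child nodes (the positions reachable in one move).
    """
    if isinstance(node, (int, float)):
        return node
    # "What would I do if I were them, trying to make me lose?" is the
    # same function, negated: their gain is exactly my loss.
    return max(-negamax(child) for child in node)


# Tiny hand-built game tree: two moves each, then a payoff.
print(negamax([[1, -2], [-3, 4]]))
```

This symmetry is exactly what breaks in non-zero-sum, multiplayer, or hidden-information games: you can no longer assume the opponent's objective is the negation of yours, or that you both see the same `node`.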
They (basically) applied the ideas from a bot that plays poker to another game. It's interesting work, though perhaps not groundbreaking.

This idea of self-play + counterfactual regret minimization does seem to be the superior way to solve game-theoretic problems. Identifying valuable game-theoretic problems remains a challenge...
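For anyone unfamiliar with CFR: the core update is regret matching, run at every information set. A toy sketch of my own (not the paper's code) that self-plays rock-paper-scissors with regret matching; the *average* strategy converges toward the Nash equilibrium (uniform 1/3 each):

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors


def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return [0, 1, -1][(a - b) % 3]


def strategy(regrets):
    """Regret matching: play actions in proportion to positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS


def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [strategy(regrets[p]) for p in range(2)]
        moves = [rng.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p in range(2):
            got = payoff(moves[p], moves[1 - p])
            for a in range(ACTIONS):
                # Regret = how much better action `a` would have done
                # against the opponent's actual move.
                regrets[p][a] += payoff(a, moves[1 - p]) - got
                strat_sum[p][a] += strats[p][a]
    # The time-averaged strategy is what converges, not the current one.
    return [[s / iterations for s in strat_sum[p]] for p in range(2)]


print(train()[0])  # roughly [0.33, 0.33, 0.33]
```

Full CFR does this recursively over the game tree, weighting regrets by counterfactual reach probabilities, but the regret-matching step above is the engine inside it.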
They should tackle StarCraft II next, as DeepMind has with AlphaStar, or at least a similar real-time strategy game with fog of war and a partially observable state.