As someone who was once semi-pro in dota (4400 MMR, get rekt), it's <i>freaky</i> watching these bots play. It's uncanny. Little things... Like, when the bots are taking a tower, one of them will stand in front of the tower and tank the creep wave, so that their creeps do more damage to the tower. They had to learn this.<p>Insta-TPing right when an enemy wastes their stun and can't cancel their TP.<p>Grouping up as 5 at the beginning of the game and pushing into the enemy jungle. <i>Pubs never do this.</i><p>The most interesting part is that OpenAI appears to be discovering new knowledge in the dota scene. For example, they always take the ranged barracks first, never the melee. This is exactly the opposite of what the pro scene does. Therefore, the smartest pro team should study what the bot is doing and trust that on average it's a better idea to always focus on the ranged barracks first. After all, if it were a bad idea, they probably wouldn't do that.<p>The most hilarious part was when OpenAI paused the game, then resumed it. This illustrates that there is still some unexplainable randomness.<p>Question for OpenAI: Is it more accurate to think of the bots as 5 separate minds, or a single mind controlling 5 heroes?<p>EDIT: By the way, TI is going on right now! <a href="https://www.twitch.tv/dota2ti" rel="nofollow">https://www.twitch.tv/dota2ti</a> If you're new to the scene, take a peek. TI is always so high energy -- even if it's hard to follow what's going on, listening to Tobi (the shoutcaster) go nuts during the game is always a highlight.<p>And of course, /r/dota2 has the best memes anywhere, hands-down. <a href="https://www.reddit.com/r/DotA2/" rel="nofollow">https://www.reddit.com/r/DotA2/</a>
From a presentation standpoint, I am impressed by and appreciate the effort in making the project process transparent and accessible, even to those without an AI background (in contrast to recent AI literature which tends to <i>obfuscate</i> the secret sauce).
The same 18 heroes? While impressive, this is less of an improvement over the August 5th match, even if they beat the pro team.<p>I thought they'd at least remove more of the rules (5 couriers, no illusions) or add some heroes.
I really want to see them play humans with no restrictions on the humans! I get that they're still in the learning phase, but I want to see the gloves come off.
Is dealing with imperfect information a research goal?<p>Does the OpenAI team think there's a way to adapt the UX of DOTA 2 "Perfect Information Edition" to communicate the game better to human players?
I'm very excited about this. When I watch this new breed of AI play, I find it really interesting to see what they value, and I greatly enjoy speculating about why in human terms.
I watched OpenAI Five play against the "team" of pros at the calibration match earlier this month. A couple of observations and takeaways.<p>The first is that the bot strategy currently revolves around the special rule of 5 invulnerable couriers. Bots find microing lots of units effortless, so the map constantly showed each bot's courier flying back and forth carrying regen. The bots never really had to go back to base or their shrines to heal. This is important because it changes the meta of the game entirely. The way the game is normally structured allows only one (very vulnerable) courier per team. Usually this means that after a team fight, teams need to reset since they've expended significant resources for the fight. But that meta was nonexistent under the rules for matches against the OpenAI Five. The humans had trouble coping with this as they weren't used to the idea of ferrying regen constantly.<p>Takeaways here - I could go on about the nuances of a single courier. But basically, the bots' gameplay will likely have to change once it comes down to 1 shared courier per team. Not sure how that will affect the "no shared mind" architecture. Also, humans will likely need to take a page out of this gameplay and realise that couriers are a highly underutilized resource. Every second a courier sits idle for no reason is just as bad as a hero doing nothing.<p>The second observation comes from the last game of AI vs pro humans. This was an interesting game where the audience picked a losing set of heroes for the team. Despite a predicted win probability of less than 2% (iirc), the AI could probably have won on account of being mechanically better than the humans. But their insistence on sticking to a strategy of "push hard" found them doing really strange things. The strangest of these was Slark running ahead to cut down creep waves in the lane on its own. The human players knew this would happen, and they kept forcing the Slark to go hide in the trees, and at some point they were always able to corner it and get the kill. Over and over again. The Slark never changed.<p>Similar things happened around the map during this game.<p>What should have happened was that the AI should have adapted to its disadvantage, and poured its efforts into first defending and then snowballing later with its mechanical advantage. But that element of "intelligence" was never there.<p>The takeaway is this. The AI will eventually beat the humans on account of always being mechanically better. They need very slight changes in their strategy to win 99.9% of the time. They can be aggressive beyond any human possibility because they can calculate everything to perfection: how long it will take to travel across the map vs how much longer it will take for an opposing hero to have its ultimate ready, for example. There are a lot of mechanical components to Dota that the AI will always have an advantage over. But the AI will likely always reveal quirks that can be turned into dumb winning strategies (aka cheese strats). Something like the whole team fighting from the trees, for example, might just confuse the AI terribly. We don't know, but every now and then someone will discover one, and the teams working on the AI will have to "patch" the behaviour.<p>Final takeaway from all of that - I'm not sure if training the AI towards "objectives" is really the best metric towards making an intelligent bot.
It seems like what's instead happening is that we get software that has no intelligence at adapting in the moment to things it's never seen, even if those things are brain-dead simple. But it'll get better at hiding those failures through mechanical perfection.<p>Upside - We get AIs capable of doing increasingly complex things in a seemingly perfect manner.<p>Downside - We get a scary future of AI filled with byzantine issues that need to be "patched".