Pretty much agree with everything in here. As I said in my earlier posting (and this blog post reiterates), a 1v1 Shadow Fiend mid is highly technical but does not require a huge search space (like in Go or Chess) or much judgment; all it takes is a few tactics (e.g. creep blocking) and good aim for the razes.<p>Also, the bot has already been beaten 50+ times[1]. There are at least 3 strategies that work. It just goes to show how primitive AI still is: it took the AI team thousands of generations to get it to this stage, but a few determined gamers outsmarted it with a few cheap meta-strategies in less than 6 hours after release.<p>[1] <a href="https://www.reddit.com/r/DotA2/comments/6t8qvs/openai_bots_were_defeated_atleast_50_times/" rel="nofollow">https://www.reddit.com/r/DotA2/comments/6t8qvs/openai_bots_w...</a>
<i>We did not make sudden progress in AI because our algorithms are so smart – it worked because our researchers are smart about setting up the problem in just the right way to work around the limitations of current techniques.</i><p>This statement is like putting wheels and a motor at the base of the goalposts.<p>Everyone who practices ML knows the reality: we're not going to see AGI for a while, and these systems are massively hard to build and do very narrow, bounded things – but they are also making massive progress in "intelligent" outputs at a pace we've never seen.<p>Yes, there is hype, but there are pretty solid reasons to be hyped.<p>We'll keep seeing people saying "oh well, it's not <i>that impressive</i>" probably until AGI has clearly taken everyone's job in 2100 and we're all just providing training data for it.
There was a "discussion" on nadota.com about the bot, and the semipro player that OpenAI used for testing chimed in.<p>Apparently the set of items the bot chose to purchase from was limited[1] and was recommended by the semipro tester. As someone who knows next to nothing about AI, my question is this: the bot was announced on stage as a blank slate, dumped into Dota, and built entirely by grinding countless games against itself; is it reasonable to pitch it this way while it had this item constraint from an outside source? I also wonder what else was recommended by the tester and then constrained.<p>The "discussion" is linked below and the tester is the user sammyboy. A warning though: nadota is 99% trolling, hate, idiocy, and garbage.<p>[1] <a href="http://nadota.com/showthread.php?41718-terrifying-1v1-mid-AI-dumpsters-suma1l-arteezy-with-ease&p=1807415&viewfull=1#post1807415" rel="nofollow">http://nadota.com/showthread.php?41718-terrifying-1v1-mid-AI...</a>
> Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too.<p>I hate that people actually see things this way. Regulation to prevent AIs from taking over the world will never happen, because nation states won't cooperate on such rules [1]. Additionally, you can't catch people using AIs to determine their actions.<p>BUT what regulation can do is prevent people from competing with a few of Larry Page's and Elon Musk's businesses.<p>[1] <a href="https://www.rt.com/news/395375-kalashnikov-automated-neural-network-gun/" rel="nofollow">https://www.rt.com/news/395375-kalashnikov-automated-neural-...</a>
Is no one upset that they claimed it `learned from self play only` when clearly it didn't? Creep blocking? Really? It learned a leftover feature from Warcraft that carried into the original DotA, in a new standalone Dota 2 client, strictly from self-play? And that distinct animation canceling the bot does? Look at amateur players and pro players (pro players do it more often than actually needed, to `warm up` the muscles, similar to extra keystrokes in SC). I wouldn't be surprised if the bot was trained on tons of pro openings. :/<p>Why not be clear about what has been done? DeepMind has said they do supervised learning first and other stuff on top of that. My guess is something similar happened here.
Surely the author knows that neither Chess nor Go has been "solved". Quotes or no quotes, it's still very inaccurate.<p>I'd also argue that chess and Go are both vastly more difficult problem sets. We literally do not have the computational power to solve a game of chess, and it's projected that we won't for another 50-100 years.
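For scale, here's a Shannon-style back-of-envelope estimate of game-tree sizes; the branching factors and game lengths are rough, commonly cited approximations (not figures from this thread), but they show why brute-forcing chess, let alone Go, is out of reach:

```python
import math

# Shannon-style back-of-envelope estimates of game-tree size.
# Branching factors and game lengths are rough approximations.
games = {
    "chess": (35, 80),    # ~35 legal moves per position, ~80 plies per game
    "go":    (250, 150),  # ~250 legal moves per position, ~150 plies per game
}
for name, (branching, plies) in games.items():
    # Tree size grows like branching^plies; work in log10 to keep numbers sane.
    log10_size = plies * math.log10(branching)
    print(f"{name}: roughly 10^{log10_size:.0f} possible game sequences")
```

That's on the order of 10^124 sequences for chess and 10^360 for Go, versus maybe 10^80 atoms in the observable universe.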
While the author is probably right that this is no huge breakthrough in AI/ML, it is yet another example of AI/ML performing an activity beyond a human's ability. I am still waiting for an example of how AI/ML will complement human life, as opposed to demonstrating yet another area where a human can be replaced.
There is a lot of hype in AI, but also in neuroscience.
Actually, there is no scientific evidence that mind and consciousness are material and arise from the brain.
Emotions are also really important in the logic and thinking process.
So without consciousness and emotions we can't have real thinking on a machine.
I hope someone can clarify.<p>What are the definitions of AI and game complexity in this field?<p>These all sound like very exciting developments. As I read about them, games such as Dota and Starcraft are often touted as more complex than Chess or Go, but--at least with Starcraft--the AIs are limited in their number of actions to level the playing field. Isn't that like claiming humans can run faster than greyhounds, provided that the greyhounds only get to use two legs? Or claiming that humans are better at chess when computers are restricted to the maximum human ply depth?<p>I also noticed a claim--again, in a Starcraft-related article--that AIs previously couldn't beat the built-in AIs (the computer players). What type of AIs are considered challengers here? Only blank-slate self-learning AIs?
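The exact interfaces used for these bots aren't described in this thread, but mechanically, "limiting the number of actions" usually means something like a sliding-window cap on actions per minute. A minimal sketch of such a throttle, with all names invented for illustration:

```python
import time
from collections import deque

class ActionRateLimiter:
    """Toy sliding-window cap on actions per minute (APM), illustrating
    how a bot's action rate could be limited to a human-like level.
    Hypothetical sketch, not any published framework's API."""

    def __init__(self, max_apm=300):
        self.window = 60.0        # sliding window length in seconds
        self.max_actions = max_apm
        self.timestamps = deque() # times of recent allowed actions

    def allow(self, now=None):
        """Return True if an action is permitted at time `now`."""
        now = time.monotonic() if now is None else now
        # Evict actions that fell out of the one-minute window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False
```

With `max_apm=2`, two actions in quick succession are allowed, a third is refused, and an action a minute later goes through again--which is exactly the "two legs only" handicap the greyhound analogy is about.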
A lot of Dota's mechanics are designed with the assumption that the player is human -- e.g. skills that a program can release with perfect timing but that are hard for a human (even a pro) to land reliably (Shadow Fiend's raze is one of them).
I see Elon Musk's tweets as a warning about the <i>potential</i> of AI, not as hype about AI or its current state.<p>The most impressive part to me is that the bots are <i>self-taught</i>. AlphaGo, on the other hand, was bootstrapped with supervised learning. They are different approaches (not to say which one is better).
Big assumptions were made by the author of this post, the biggest being that the bot used an API to access game state rather than raw pixels. If the AI were limited to pixels, the achievement would be much greater.
The bot did not have its creep blocks hard-coded. At the event, a player purposefully messed up his own block to see if the bot would respond, and it did.
Where do bots go to fight other bots in millions of games, where algorithms compete for superiority? I assume there must be an ongoing "marketplace" to match bots and run the simulations.
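There's no public marketplace: for this bot, OpenAI described plain self-play, where the current policy plays huge numbers of games against copies of itself (commonly against a pool of its own frozen past versions, to keep strategies from collapsing). A minimal sketch of that loop, with all names and signatures hypothetical:

```python
import random

def self_play_league(train_step, snapshot, n_games=1000, pool_size=10):
    """Toy self-play loop: the current agent trains against a pool of
    its own past versions. `train_step(opponent)` plays one game and
    updates the current agent; `snapshot()` returns a frozen copy of it.
    Illustrative sketch only, not OpenAI's actual training code."""
    past_versions = [snapshot()]          # seed the pool with the initial agent
    for game in range(n_games):
        opponent = random.choice(past_versions)
        train_step(opponent)              # play one game, update current agent
        if game % 100 == 99:              # periodically freeze a new copy
            past_versions.append(snapshot())
            past_versions = past_versions[-pool_size:]  # keep pool bounded
    return past_versions
```

So the "matchmaking" is internal to the training run; the millions of games happen against the agent's own history, not against third-party bots.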