I was at the 2003 match of Garry Kasparov vs Deep Junior -- the strongest chess player of all time vs what was at that point the strongest chess playing computer in history. Kasparov drew that match, but it was clear it was the last stand of homo sapiens in the man vs machine chess battle. Back then, people took solace in the game of Go. Many boldly and confidently predicted we wouldn't see a computer beat the Go world champion in our lifetimes.<p>Tonight, that happened. Google's DeepMind AlphaGo defeated the world Go champion Lee Sedol. An amazing testament to humanity's ability to continuously innovate at a continuously surprising pace. It's important to remember, this isn't really man vs machine, as we humans programmed the algorithms and built the computers they run on. It's really all just circuitous man vs man.<p>Excited for the next "impossible" things we'll see in our lifetimes.
This is my generation's Garry Kasparov vs. Deep Blue. In many ways, it is more significant.<p>Several top commentators were saying how AlphaGo has improved noticeably since October. AlphaGo's victory tonight marks the moment that Go is no longer a human-dominated contest.<p>It was a very exciting game, with an incredible level of play. I really enjoyed watching it live with the expert commentary. I recommend the AGA youtube channel for those who know how to play. They had a 9p commenting at a higher level than the deepmind channel (which seemed geared towards those who aren't as familiar).
I was really hoping to see a more technical discussion than what I found here in the comments. It's too bad that such a cool accomplishment gets reduced to arguments about the implications for an AI apocalypse and "moving the goalposts". This isn't strong AI, and it was at least believed to be possible (albeit incredibly difficult), but it is still a remarkable achievement.<p>To my mind, this is a really significant achievement not because a computer was able to beat a person at Go, but because the DeepMind team was able to show that deep learning could be used successfully on a complex task that requires more than an effective feature detector, and that it could be done without having all of the training data in advance. Learning how to search the board as part of the training is brilliant.<p>The next step is extending the technique to domains that are not easily searchable (fortunately for DeepMind, Google might know a thing or two about that), and to extend it to problems where the domain of optimal solutions is less continuous.
I posted in the earlier thread because this one wasn't up yet[1].<p>Some quick observations:<p>1. AlphaGo underwent a substantial amount of improvement since October, apparently. The idea that it could go from mid-level professional to world class in a matter of months is kinda shocking. Once you find an approach that works, progress is fairly rapid.<p>2. I don't play Go, so it was perhaps unsurprising that I didn't really appreciate the intricacies of the match; being familiar with deep reinforcement learning didn't help either.
You can write a program that will crush humans at chess with tree-search + position evaluation in a weekend, and maybe build some intuition for how your agent "thinks" from that, plus maybe playing a few games.
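The skeleton of such a weekend engine fits in a few lines. To keep this self-contained, here is a sketch on a toy subtraction game (take 1-3 stones, whoever takes the last stone wins) rather than chess; the game and all names are illustrative, but the tree-search + position-evaluation pattern is the same:

```python
# Toy illustration of the "tree-search + position evaluation" pattern.
# Chess is swapped for a subtraction game to stay self-contained:
# players alternately take 1-3 stones; whoever takes the last stone wins.

def moves(stones):
    # legal moves for the side to play
    return [m for m in (1, 2, 3) if m <= stones]

def negamax(stones, depth):
    # terminal: the previous player took the last stone, so the side to move lost
    if stones == 0:
        return -1
    if depth == 0:
        return 0  # stand-in for a heuristic evaluation of the position
    # best achievable score, negating the opponent's best reply
    return max(-negamax(stones - m, depth - 1) for m in moves(stones))

def best_move(stones, depth=10):
    # pick the move whose resulting position is worst for the opponent
    return max(moves(stones), key=lambda m: -negamax(stones - m, depth - 1))
```

A real chess engine replaces `moves` with a move generator and the `depth == 0` case with a material/positional evaluation function, and adds alpha-beta pruning; the skeleton is otherwise unchanged.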
Can you get that same level of insight into how AlphaGo makes its decisions?
Even evaluating the forward prop of the value network for a single move would likely require a substantial amount of time if you did it by hand.<p>3. These sorts of results are amazing, but expect more of the same, more often, over the coming years. More people are getting into machine learning, better algorithms are being developed, and now that "deep learning research" constitutes a market segment for GPU manufacturers, the complexity of the networks we can implement and the datasets we can tackle will expand significantly.<p>4. It's still early in the series, but I can imagine it's an amazing feeling for David Silver of DeepMind.
I read Hamid Maei's thesis from 2009 a while back, and some of the results presented mentioned Silver's implementation of the algorithms for use in Go[2].
Seven years between trying some things and seeing how well they work and beating one of the best human Go players. Surreal stuff.<p>---<p>1. <a href="https://news.ycombinator.com/reply?id=11251526&goto=item%3Fid%3D11250748" rel="nofollow">https://news.ycombinator.com/reply?id=11251526&goto=item%3Fi...</a><p>2. <a href="https://webdocs.cs.ualberta.ca/~sutton/papers/maei-thesis-2011.pdf" rel="nofollow">https://webdocs.cs.ualberta.ca/~sutton/papers/maei-thesis-20...</a> (pages 49-51 or so)<p>3. Since I'm linking papers, why not peruse the one in Nature that describes AlphaGo? <a href="http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html" rel="nofollow">http://www.nature.com/nature/journal/v529/n7587/full/nature1...</a>
What an incredible moment - I'm so happy to have experienced this live. As noted in the Nature paper, the most incredible thing about this is that the AI was not built specifically to play Go as Deep Blue was. Vast quantities of labelled Go data were provided, but the architecture was very general and could be applied to other tasks. I absolutely cannot wait to see advancements in practical, applied AI that come from this research.
I just wrote a blog post about this. I was up until 1am this morning watching the game live. I became interested in AI in the 1970s, when the game of Go was considered a benchmark for AI systems. I wrote a commercial Go-playing program for the Apple II that did not play a very good game by human standards but did play legally and understood some common patterns. At about the same time I was fortunate enough to get to play both the women's world Go champion and the national champion of South Korea in exhibition games.<p>I am a Go enthusiast!<p>The game played last night was a real fight in three areas of the board, and in Go local fights affect the global position. AlphaGo played really well, and world champion (sort of) Lee Sedol resigned near the end of the game.<p>I used to work with Shane Legg, a cofounder of DeepMind. Congratulations to everyone involved.
I watched the commentary by Michael Redmond (9-dan professional), and he didn't point out a single obvious mistake by Lee Sedol during the entire match. Just really high-quality play by AlphaGo.<p>Really amazing moment to see Lee Sedol resign by placing one of his opponent's stones on the board.
I was really expecting Lee Sedol to win here.
I'm very excited, and congratulations to the DeepMind team, but I'm a bit sad about the result, as a go player and as a human.
"AlphaGo's Elo when it beat Fan Hui was 3140, using 1202 CPUs and 176 GPUs. Lee Sedol has an equivalent Elo of 3515 on the same scale (Elos on different scales aren't directly comparable). For each doubling of computer resources, AlphaGo gains about 60 points of Elo."
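Taking those quoted figures at face value, the implied compute gap is easy to work out. This is back-of-the-envelope arithmetic only; presumably much of AlphaGo's actual improvement since October came from algorithmic and training gains rather than raw hardware:

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
alphago_elo = 3140      # AlphaGo's Elo vs. Fan Hui (October)
lee_sedol_elo = 3515    # equivalent rating on the same scale
elo_per_doubling = 60   # quoted Elo gain per doubling of compute

gap = lee_sedol_elo - alphago_elo    # 375 Elo
doublings = gap / elo_per_doubling   # 6.25 doublings
resource_factor = 2 ** doublings     # roughly 76x the October hardware
```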
Terrific accomplishment.<p>Just a question to throw out there - does anyone feel like statements like this one "But the game [go] is far more complex than chess, and playing it requires a high level of feeling and intuition about an opponent’s next moves."<p>… seem to show a lack of understanding of both go and chess?<p>I understand there may be some cross-sports trash talking, but chess, played at a high level <i>by humans</i>, relies on these things as well. The more structured nature of chess means that it is (or at least was) more amenable to analysis by brute force computer algorithm, but no human evaluates and scores hundreds of millions of positions while playing chess or go.<p>Eh, the mainstream media is going to say this regardless, and I suppose it's just unrealistic to expect them to draw a distinction between <i>complex for humans</i> and <i>amenable to brute force computation</i> but statements like this always seemed to show a remarkable lack of awareness of how people actually play these games (though I am not an especially skilled chess or go player).
The funny thing about AI at this scale is we don't really know why the computer does what it does. It's more of an inductive extrapolation: we can verify that a technique works for a small problem, so we throw a whole bunch of GPU power and data at it and it SHOULD work for a big problem. How it actually works is fuzzy, though, as there's just a couple of gigabytes of floats representing weights in neural networks. No human can look at that and say: "Oh! I see why it made that move". It's so much data that what the AI is doing becomes kind of nebulous.
After Go, the next AI challenge they're looking at is Starcraft: <a href="https://twitter.com/deeplearning4j/status/706541229543071745" rel="nofollow">https://twitter.com/deeplearning4j/status/706541229543071745</a>
We still have Arimaa. It's designed specifically to make it difficult for computers to play.<p><a href="http://arimaa.com/arimaa/" rel="nofollow">http://arimaa.com/arimaa/</a>
A human was beaten with some thousands of CPUs & GPUs. On a calorie level, the human is still more efficient.<p>On time to learn these skills... going from zero (computer rolls off the assembly line) to mastery, the computer wins.<p>Actually, maybe the computer wins even on the caloric level, if you consider all the energy that was required to get the human to that point (and all the humans that tried but didn't get to that point).
Beating humans in Go is, in itself, not all that exciting. <i>Go bots have been beating strong humans for quite some time now</i> (just not the very top humans).<p>There are other implications that make this AlphaGo progress super exciting though. Go captures strategic elements that go well beyond the microcosm of one nerdy board game.<p>That's the real reason Go has been around for >2,000 years, and why this AI progress is relevant, despite its limited "game domain".<p>I wrote about it here, from my perspective of an avid Go player & machine learning professional [1].<p>[1] <a href="http://rare-technologies.com/go_games_life/" rel="nofollow">http://rare-technologies.com/go_games_life/</a>
Can someone explain why this is more impressive than a computer beating top chess players over a decade ago? I'm not very familiar with Go, and while there are far more intersections on a Go board, it seems less sophisticated than chess to me.<p>Maybe Go has way more possible moves and emergent strategies, or something else I'm not taking into account.
I'm truly amazed too, but I'm not surprised or shocked. Once I knew that the previous master had been beaten, I knew it was just a matter of time before the #1 player was topped.<p>What would be shocking is to find out that a famous writer, musician or scientist is, in fact, just an alias for an advanced AI system :) It needs a little trick, because people would have to be tricked into believing that there's a real person behind the name.<p>Oh wait, I just remembered that there's a (mediocre) movie made on the subject: S1m0ne ( <a href="http://www.imdb.com/title/tt0258153/" rel="nofollow">http://www.imdb.com/title/tt0258153/</a> )<p>Are you saying it won't happen? Think of the guys saying the same of Go :)
What this actually means is that "the approach" the AlphaGo team developed to "computationally" play Go, a computationally intractable problem, will be very useful for other intractable problems. The media is going to go crazy without understanding what actually happened.
If you are getting hysterical over this and thinking that robots are going to take over, then please try this:
Before the start of the game, add/remove/update any rules of the game, tell both players - the human and the computer - about the new rules, and let's see who wins.
This not only shows the insane advances in computer AI, but an incredible advancement between the Fan Hui games and this one. I'm still going through the kifu to get a sense of how it could have improved so much in only 6 months.
I want to scratch my itch and play some go. I suck, and playing against other players online I get destroyed so quickly I feel like I'm ruining their fun. Where can I find a fun bot with variable difficulty?
Extremely interesting news and kind of sad as a human being :)<p>I don't really know that much about AI, but hopefully some experts can tell me - how different are the networks that play go vs chess for example? Or recognise images vs play go?<p>What I mean is - if you train a network to play go and recognise images at the same time, will the current techniques of reinforcement learning/deep learning work or are the techniques not sufficient at the moment?<p>If that works, then it really does seem like a big step towards AGI.
I had a feeling that AlphaGo would beat Lee Sedol yesterday after watching Fan Hui's interview [1].<p>According to Hui's recollection, the defeat came down to these things: state of mind, confidence, and human error. Gaming psychology is a big part of the game; without the fear of being defeated, and almost never making mistakes the way humans do, machine intelligence beating humans at the highest level of competitive sports/games is inevitable. However, to truly master the game of Go, which in ancient Chinese society was more of a philosophy or art form than a competitive sport, there is still a long way to go.<p>There were a ton of details Hui could not speak of due to the non-disclosure agreement he signed with DeepMind, but those were the gist of the interview.<p>In the end, the AlphaGo match is 'a win for humanity', as Eric Schmidt put it. [2]<p>[1] <a href="http://synchuman.baijia.baidu.com/article/344562" rel="nofollow">http://synchuman.baijia.baidu.com/article/344562</a> (In Chinese)<p>Google Translate: <a href="https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&u=http%3A%2F%2Fsynchuman.baijia.baidu.com%2Farticle%2F344562" rel="nofollow">https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&...</a><p>[2] <a href="http://www.zdnet.com/article/alphago-match-a-win-for-humanity-eric-schmidt/" rel="nofollow">http://www.zdnet.com/article/alphago-match-a-win-for-humanit...</a>
reference: SGF file on OGS: <a href="https://online-go.com/demo/114161" rel="nofollow">https://online-go.com/demo/114161</a><p>To my untrained eye, AlphaGo was already way ahead by move 29 in the match tonight, with black having a weak group on the upper side, while black wasted a lot of moves on the right side as white kept pushing (Q13, Q12). White erased that area later because those pushes were on the 4th line for black and the area was too big to control. Black never had a chance to recover from this bad fight. After those reductions and the invasion on the right side, white came back to the 3-3 at C17, which felt like it solidified the win.<p>Some people are asking what the losing move was for Lee Sedol. I wanted to joke and say "the first one..", but maybe R8 was too conservative, being away from the urgent upper side where white started all the damage.
No surprise at all: the human brain is an organ with a limited number of neurons, while computers double in performance every 18 months. And not just in chess; I would say that AI will eventually beat humans across the board by an ever-growing margin, especially once systems learn how to improve themselves.
I was just wondering: does AlphaGo's game strategy also emulate some of the psychological strategies used by real humans, such as bullying, confusing, or making fun of its opponent when it sees fit?
What do you guys think of the future progress on the game Go? Will our only chance against AI be to team up with an AI to beat the lone AI? Like in this article about centaur chess players: <a href="http://www.wired.co.uk/magazine/archive/2014/12/features/brain-power/page/2" rel="nofollow">http://www.wired.co.uk/magazine/archive/2014/12/features/bra...</a> (2014) It all sounds very Gundam Wing to me.
<i>Deep Blue: </i><p>Massive search +<p>Hand-coded search heuristics +<p>Hand-coded board position evaluation heuristics [1]<p><i>AlphaGo: </i><p>Search via simulations (Monte Carlo Tree Search) +<p>Learned search heuristics (policy networks) +<p>Learned patterns (value networks) [2]<p>Human strongholds seem to be our ability to learn search heuristics and complex patterns. We can perform some simulations but not nearly as extensively as what machines are capable of.<p>The reason Kasparov could hold himself against Deep Blue's 200,000,000-positions-per-second search during their first match was probably his much superior search heuristics, which drastically focus on better paths, and his better evaluation of complex positions. The patterns in chess, however, may not be complex enough that a better evaluation function yields much benefit. More importantly, chess's branching factor after using heuristics is low enough that massive search yields a substantial advantage.<p>In Go, patterns are much more complex than in chess, with many simultaneous battlegrounds that can potentially be connected. Go's branching factor is also several times higher than chess's, rendering massive search without good guidance powerless. These in turn raise the value of learned patterns. Google stated that its learned policy network is so strong "that raw neural networks (immediately, without any tree search at all) can defeat state-of-the-art Go programs that build enormous search trees".
This is equivalent to Kasparov using learned patterns to hold himself against massive search in Deep Blue (in their first match), and it is a key reason Go professionals can still beat other Go programs.<p>AlphaGo demonstrates that combining algorithms that mimic human abilities with powerful machines can surpass expert humans in very complex tasks.<p>The big questions we should strive to answer before it is too late are:<p>1) What trump cards do humans still hold against computer algorithms and massively parallel machines?<p>2) What do we do when a few more breakthroughs have enabled machines to surpass us in all relevant tasks?<p>Note: It is not entirely clear from the IBM article that the search heuristics were hand-coded, but it seems likely given the prevalent AI techniques at the time.<p>[1] <a href="https://www.research.ibm.com/deepblue/meet/html/d.3.2.html" rel="nofollow">https://www.research.ibm.com/deepblue/meet/html/d.3.2.html</a>
[2] <a href="http://googleresearch.blogspot.com/2016/01/alphago-mastering-ancient-game-of-go.html" rel="nofollow">http://googleresearch.blogspot.com/2016/01/alphago-mastering...</a>
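For readers unfamiliar with the "search via simulations" component in the comparison above, here is a toy UCT (Monte Carlo Tree Search) sketch on a trivial subtraction game (take 1-3 stones, last stone wins). The game and all names are my own illustrations; this is only the generic algorithm, and AlphaGo's key step was replacing the random rollouts and blind expansion below with its learned policy and value networks:

```python
# Generic UCT / Monte Carlo Tree Search on a toy subtraction game.
import math
import random

def moves(stones):
    # legal moves: take 1-3 stones; whoever takes the last stone wins
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, parent=None):
        self.stones = stones
        self.parent = parent
        self.children = {}   # move -> Node
        self.wins = 0.0      # accumulated wins for the player who moved INTO this node
        self.visits = 0

def uct_child(node, c=1.4):
    # UCB1: exploit observed win rate, explore under-visited children
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones):
    # random playout; returns +1 if the side to move at the start wins, else -1
    last_mover = -1  # if no stones remain, the side to move has already lost
    sign = 1
    while stones > 0:
        stones -= random.choice(moves(stones))
        last_mover, sign = sign, -sign
    return last_mover

def mcts(root_stones, iters=3000):
    root = Node(root_stones)
    for _ in range(iters):
        node = root
        # 1. selection: descend while fully expanded and non-terminal
        while node.stones > 0 and len(node.children) == len(moves(node.stones)):
            node = uct_child(node)
        # 2. expansion: add one untried child
        if node.stones > 0:
            m = random.choice([m for m in moves(node.stones)
                               if m not in node.children])
            node.children[m] = Node(node.stones - m, node)
            node = node.children[m]
        # 3. simulation: value for the side to move at `node`
        v = rollout(node.stones) if node.stones > 0 else -1
        # 4. backpropagation, flipping perspective at each level
        while node is not None:
            node.visits += 1
            node.wins += (1 - v) / 2  # -v is the value for whoever moved into node
            v = -v
            node = node.parent
    # play the most-visited move from the root
    return max(root.children, key=lambda m: root.children[m].visits)
```

In AlphaGo, a policy network replaces the uniform expansion priors to narrow the search, and a value network (mixed with rollouts) replaces the pure random-playout estimate in step 3.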
AI is good for rules-based systems, but most of the world's problems that need to be solved don't have rules in the way a board game does.
Sure, it's cool that a computer beat a human at a board game, but that's like celebrating a penguin being better at fishing than a person with bare hands.
It's almost kind of bad timing in the U.S., what with one of the most insane primary seasons in our history -- this will probably not make the news at all let alone the front page like Kasparov's and Magnus's games did.
Learning from experience applies to both the program and the champion. Does this mean that if the champion plays the machine several more times, he has a chance of winning?
It'll be interesting to see what new things we learn about Go itself from DeepMind. The game is very deep, and apparently we haven't found the bottom yet!
I think it will be very interesting if Lee Sedol can win one. Humans have different blueprints and environments. Who is to say a human can't become better?
AlphaGo can be beaten. It uses reinforcement learning so it will perform the set of moves that in the past led to its win. So predictable. Sedol just needs to take control and make it play in a predictable fashion. Also, perhaps play obscure moves that AlphaGo wouldn't have trained on. Perhaps next year's Go winner will have a PhD in computer science.
I would like to see the same match, but with places switched: AlphaGo playing as black this time, to see the choices it would make, and whether they would align with Lee's.
The thing that was supposed to take at least 10 years happened. Only last month people were still saying that no way AlphaGo will beat the champion and that it will be crushed. Today everybody will have seen it coming and say that it was normal.<p>Yet people will still tell that worrying about AI taking over is like worrying about overpopulation on Mars, and that this is a problem at least 50 years out.
Man, I am fired up to watch tonight's game... like I am fired up for UFC.<p>There should be, like, a North American Go Nationals or something televised on Twitch.<p>Anyone putting money down on Sedol? He said it would be either 5-0 or 4-1 in his favor.
Lee Sedol should have played that top-left 3-3 move earlier (at least before white covered it), WTF. Humanity is no longer at the top of the intelligence pyramid...