Is it just my sonar beeping off the charts, or does anyone else hear the unmistakable signs of a submarine article? (1)<p>Apologies in advance for what may be perceived as a rant. I have a very low tolerance for clickbait-y BS like this as it pertains to my own passions as a lifelong chess devotee and former professional player.<p>First, the author of the article has no professional credibility in either chess or machine learning. He's a professor of math and a writer. No disrespect to either math or writing; I love and value both very highly, but they have very little to do with chess and machine learning per se.<p>The problem is he tries to present AlphaZero as "humankind’s first glimpse of an awesome new kind of intelligence," which is really a bit of a stretch unless you add the disclaimer that technically all AlphaZero does is play 3 types of perfect-information games quite well. This is undoubtedly a great accomplishment, particularly in the field of Go, which many domain experts intuitively felt would not fall to our AI overlords for at least another 5-10 years of computing-power/hardware advances.<p>(As someone who had the unfortunate label of "prodigy" applied in my youth due to earning the title of chess master at age 10, I consider myself somewhat of a domain expert in chess, and I was one of those people who got it wrong. I barely know the rules of Go, but intuitively I could comprehend that it was several orders of magnitude more complex than chess, and I was really hoping that the Go gurus would fend off the machines for longer. They didn’t. Hats off to DeepMind.)<p>But.
With all due respect to DeepMind engineers for an impressive result in chess and go, it's a bit too early to start thinking of AlphaXXX as an "oracle" where all we can do is "sit at its feet and listen intently" while we would "not understand why the oracle was always right" and eventually be left "gaping in wonder and confusion."<p>(As an aside, the amount of pseudo-religious worship language in the piece is truly off the charts. I realize it stokes the passions, but it would be great if we could talk about AI’s true strengths and limitations without resorting to such histrionics. But I digress.)<p>Why is it too early to start bowing down to a new god? Well, for starters, they basically just brute-forced the game of Go a few years earlier than predicted, and this wasn't a pure software win: it was also heavily dependent on massive increases in computing power, i.e. GPUs and ginormous cloud-based render farms.<p>Secondly, the author tries to make the leap from AlphaZero [good at 3 perfect-information games: chess, go and shogi] to what he calls "a more general problem-solving algorithm; call it AlphaInfinity". Note how he invokes the holy grail of AGI (Artificial General Intelligence) without actually using the term, which would set off alarm bells in, well, anyone who knew anything about AI and wasn't employed by DeepMind/Google.<p>Notice further how massive the leap is from "machine that can play 3 games well" to "machine that can, you know, actually think about stuff like a human can, including these pesky 'edge cases' and un-trained-for scenarios that always confuse our algorithms despite their otherwise inhuman level of perfection".<p>One great example of a case that should make one question these glorious predictions is a research paper titled “Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects”, which shows how ML models consistently mistake a school bus for a snowplow once the bus is put in an unusual pose (2).
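You don't even need a fancy ImageNet model to see this failure mode. Here's a toy sketch (my own, not the paper's setup, and digits rather than school buses): a simple classifier trained only on upright digits falls apart on the very same digits rotated 90 degrees, because nothing in its training ever taught it pose invariance.

```python
# Toy demo of the "strange poses" failure mode: train on upright digits,
# then evaluate on the same test digits rotated 90 degrees.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 grayscale digits, flattened to 64
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Accuracy on upright test digits
upright_acc = clf.score(X_test, y_test)

# Rotate each test image 90 degrees and re-evaluate the same classifier
X_rot = np.array([np.rot90(img.reshape(8, 8)).ravel() for img in X_test])
rotated_acc = clf.score(X_rot, y_test)

print(f"upright: {upright_acc:.2f}, rotated: {rotated_acc:.2f}")
```

Obviously a linear model on 8x8 digits is not a deep conv net, but the underlying point is the same: the model has learned pixel statistics of the poses it saw, not the "essence" of a digit.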
Far be it from me to dare to burst the bubble/reality distortion field of certain ML leaders and visionaries, but c’mon: a human child, once they have truly learned to recognize a school bus, would never mistake it for a snowplow, even if it were upside down.<p>This flaw doesn’t mean we can’t augment the training data to handle these kinds of rotations, but it does mean we have a lot of work to do before we can say these ML models have in some way grasped the “essence” of “school bus” or [insert other object here] in a deep symbolic way, and by "deep symbolic way" I mean "anything like the way a human child learns it, reasonably quickly, before moving on to other, exponentially harder tasks".<p>I could go on, but I won’t. Just in case my overall point isn’t clear:<p>1. AlphaZero is an unbelievably impressive accomplishment <i>within the limited subset of life that is [chess, go, shogi]</i><p>2. ML approaches, even in computer vision, have a long way to go before they achieve anything remotely resembling child-level human intelligence<p>3. Therefore, can we please, please stop with the marketing masquerading as news articles about DeepMind’s latest result. And if anyone at DeepMind is listening: your product is pretty sweet! It would be better strategically to simply let it speak for itself, without trying to frame it as AGI.<p>1. <a href="http://www.paulgraham.com/submarine.html" rel="nofollow">http://www.paulgraham.com/submarine.html</a><p>2. <a href="https://arxiv.org/pdf/1811.11553.pdf" rel="nofollow">https://arxiv.org/pdf/1811.11553.pdf</a>