The claim is apparently that superhuman, savant-level intelligence could be confined to a domain of knowledge and not risk becoming generalized intelligence. I'm skeptical.<p>If you're "only" superintelligent at language translation, or writing movies, or chess, I suspect that as we ascend the tiers of increasingly super superintelligences, there's a depth of informational, structural understanding that avails itself of abstract meta principles, and meta-meta principles, and meta-meta-meta principles, and on to infinity. And that at a sufficiently high level of abstraction, something about being brilliant at translating the subtle irony of a Shakespearean sonnet into a dead tribal language is also at play in weighing strategic options in an incredibly complicated game of chess, and is also at play in reading culture and figuring out what kind of movie will be most successful at the box office.<p>I think any domain-specific intelligence, as it approximates "perfect", would independently discover and solve similar high-level questions and be transferable to other domains, the way there are general principles of manufacturing that apply to multiple products. And from a sufficiently advanced perspective, "solving" chess and "solving" Shakespearean sonnet translation would look as similar to each other as painting a car red vs painting a car blue.