An odd question for the article to pose, really. The art of telling a story involves change, which is far more compelling if it creates at least some conflict and confusion[1]. If AI or artificial mind enhancements are key to the plot, they're more interesting as the cause of the problem than as its solution[2], and you'd have to be writing for an audience of pretty hardcore nerds to bother pointing out that the background music was composed by a creative computer. As for assessing how people might react when blessed with superaugmented intelligences, there are plenty of cautionary examples of people with natural but notable extremes of intelligence who've been tripped up by crippling vulnerabilities. The history of experiments on human minds is pretty grisly too.
Given plenty of reasons to believe the acceleration of technical progress won't lead to blissful happiness, and the tendency for blissful happiness to make a dull storyline anyway, why <i>wouldn't</i> SciFi continue to be glum about AI?<p>For all the article's comments about the "mindboggling" potential of Moore's law, my word processor looks about the same as it did nineteen years ago, and computers still suck at simple games like Go that, unlike chess, can't be brute-forced. I'm grateful to Google for making information that bit easier to find, but I'm even more grateful to the humans who created or curated the content in the first place.<p>[1] Compare "Do Androids Dream of Electric Sheep?" with Edward Bellamy's utopian "Looking Backward". Both visions of the early second millennium are pretty far from the mark (indeed, even the <i>technology</i> of the latter book is arguably more accurate, despite its being written in the nineteenth century), but only one of these books is considered a riveting read that says profound things about human nature. Similarly, there's a reason some of H.G. Wells' work is lauded and some of it is laughed at.<p>[2] Deus ex machina endings suck, and "luckily the AI figured it out" is definitely a failure of imagination and nerve.