I suspect that it is easy to fixate on fantasies of massive technological leaps forward, because the truly revolutionary changes that underpin our lives are like air: so common that we overlook them all the time.<p>If you sat down with a person from, say, 1750, and wanted to explain the modern world, you would probably tell them about things like antibiotics, cars and planes, Snapchat, that sort of thing.<p>All the big stuff that we think defines the modern era.<p>What would probably really impress the hell out of them, though, would be a supermarket.<p>To a person from that time, the idea of being able to eat fresh peaches in the dead of winter would be just as magical as being able to instantly translate Russian into English, and yet it probably wouldn't occur to us to bring it up because fruit at the supermarket is just... obvious.<p>So as we look forward, the question is: what will be obvious to people ten or twenty years from now?
I wonder if the great leap he mentions (1870-1970) is mostly down to the discovery of electromagnetism.<p>It must have seemed magical. Here's this thing that allows you to move huge amounts of power over vast distances, but deposit it in whatever tiny portions you like, in whatever room you want, and it's so easy you run the wires along your existing walls.<p>The same force allows you to send both signal and power through thin air. And if you don't want it there, you can shield against it.<p>Or you can make a motor quite elegantly by winding some wires.<p>And you can manipulate it so finely that you can do calculations with teeny tiny amounts of it. And store the results.<p>But one thing I was wondering is how much this depends on basic science. Now that we've discovered EM, have we exhausted the basic knowledge about the world? Is every technology going forward just a refinement of particular aspects of nature, or are the ways we can combine things unlimited?
> <i>Jennifer and the many other programmes like her are examples of a “voice-directed application” — just software and a simple, inexpensive earpiece. Such systems have become part of life for warehouse workers: a voice in their ear or instructions on a screen tell them where to go and what to do, down to the fine details. If 13 items must be collected from a shelf, Jennifer will tell the human worker to pick five, then five, then three. “Pick 13” would lead to mistakes. That makes sense. Computers are good at counting and scheduling. Humans are good at picking things off shelves. Why not unbundle the task and give the conscious thinking to the computer, and the mindless grabbing to the human?</i><p>I remember reading in Scientific American a few years back about a study whose results were striking and ran against common intuition. In manufacturing, robots had overtaken humans at a particular task by a long shot: they could complete assemblies much more quickly and with higher accuracy, with humans coming in afterwards to apply finishes that required small hands, light touches, or hard-to-explain measurements.<p>Robots > Humans. Humans finished what robots couldn't.<p>What wasn't foreseen, however, was that there was an additional step that could shave off even more time. Given some artificial intelligence, the robots could tell the humans when, and how, to do their part of the assembly <i>during</i> and <i>as part of</i> the assembly process. The robot figuring out the most efficient assembly, and instructing the human to do what it couldn't, was a third level of efficiency no one had anticipated.<p>It turned out:<p>Robots + Humans > Robots > Humans.
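The "pick five, then five, then three" detail from the quote is easy to sketch in code. This is only a toy illustration, and chunk_pick is a made-up name rather than anything from the article or a real warehouse system; the point is just that the computer does the counting and hands the human one small, checkable number at a time.<p><pre><code>    # Toy sketch: split a pick quantity into chunks small enough to
    # count at a glance, then prompt the worker once per chunk.
    def chunk_pick(total, max_chunk=5):
        chunks = []
        remaining = total
        while remaining > 0:
            take = min(max_chunk, remaining)
            chunks.append(take)
            remaining -= take
        return chunks

    for n in chunk_pick(13):
        print(f"Pick {n}")  # prints "Pick 5", "Pick 5", "Pick 3"
</code></pre>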
The article mentions "Jennifer", which is the real-world implementation of Marshall Brain's "Manna 1.0". Here's a video of it in action.[1]<p>This is another example of "machines should think, people should work". The computer is the boss. That may be the future.<p>[1] <a href="https://www.youtube.com/watch?v=oC-ReBX0icU" rel="nofollow">https://www.youtube.com/watch?v=oC-ReBX0icU</a>
FTA: "Why not unbundle the task and give the conscious thinking to the computer, and the mindless grabbing to the human?"<p>The article was pretty good overall, but this one's a pet peeve of mine. We're offloading our thinking to computers and this magical "AI" in hopes that all that hard work (and possibility for error!) is going to be avoided. But we're going to find out--or perhaps not, as we slip below the dumbness horizon, no longer able to formulate self-reflective thoughts ourselves--that thinking is joyful, fun, glorious, not laborious or error prone.<p>And as "soft" AI advances, we just can't admit to ourselves that this, too, will fail to deliver. To wit: if something absolutely must be translated between language A and language B, in 2017 a bilingual person will still _absolutely mop the floor_ with machine translation.<p>Google translate:
"Zum Witz: Wenn irgend etwas absolut zwischen Sprache A und Sprache B übersetzt werden soll, wird im Jahr 2017 eine zweisprachige Person den Boden mit der maschinellen Übersetzung noch niemals stoßen."<p>Bahaha.
When you're writing fiction, you have to choose which elements to emphasize to get your point across. Blade Runner wanted to be about questions along the boundary of "What is it to be human?"<p>It didn't want to be about everyday technology (though it had to pay some lip service to that). So it didn't want to spend the viewer's mental energy on the replacement for phone booths.<p>(I think the author's point still stands, but that example doesn't support it.)
<i>Blade Runner</i> is 40s noir. The characters are noir. The wardrobe is noir. Rachael's haircut is noir. The cinematography is noir. Deckard is an ex-cop.<p>The phone booth makes perfect sense.
> plausible that LA would look much the same<p>This is actually something I liked, in contrast to earlier futurism. If I look around me here in Europe, I see that buildings rarely get removed once they're standing. The last time the face of cities here changed was after the Second World War. If I look at pictures from the 60s, hardly anything has changed since, except maybe the introduction of some pedestrian zones.
Intro to The Napoleon of Notting Hill, by G.K. Chesterton:<p>"THE human race, to which so many of my readers belong, has been playing at children's games from the beginning, and will probably do it till the end, which is a nuisance for the few people who grow up. And one of the games to which it is most attached is called, "Keep tomorrow dark," and which is also named (by the rustics in Shropshire, I have no doubt) "Cheat the Prophet." The players listen very carefully and respectfully to all that the clever men have to say about what is to happen in the next generation. The players then wait until all the clever men are dead, and bury them nicely. They then go and do something else. That is all. For a race of simple tastes, however, it is great fun."
If you enjoyed this article, Tim Harford (the author) also has a podcast series called 50 Things That Made The Modern Economy[0] which expands on pretty much every one of the inventions mentioned in the post. The episodes are roughly 10 minutes long, well produced, and always cite the source material. Worth listening to.<p>[0] <a href="http://www.bbc.co.uk/programmes/p04b1g3c/episodes/downloads" rel="nofollow">http://www.bbc.co.uk/programmes/p04b1g3c/episodes/downloads</a>
The Jennifer unit is straight out of the short story, "Manna"[1].<p>[1] <a href="http://marshallbrain.com/manna1.htm" rel="nofollow">http://marshallbrain.com/manna1.htm</a>
Payphones could make a comeback, and besides, in the film the "payphone" was really a public videophone.<p><a href="https://www.youtube.com/watch?v=D-YBYzo4XUY" rel="nofollow">https://www.youtube.com/watch?v=D-YBYzo4XUY</a><p><pre><code> Deckard calls Rachael on a public videophone.</code></pre>
From the script: <a href="http://www.trussel.com/bladerun.htm" rel="nofollow">http://www.trussel.com/bladerun.htm</a>
When I think about technological advances in my lifetime, the first thing that comes to mind is Martin Riggs in Lethal Weapon parking on a bridge and pulling his mobile phone out of the trunk. It was so cool...<p>The reality, 30 years later, is cheap, ubiquitous, high-speed, mobile access to the world's knowledge and, in fact, to most of humanity. When I press 'add comment', nearly 4 billion human beings can immediately upvote it.<p><a href="http://www.internetworldstats.com/stats.htm" rel="nofollow">http://www.internetworldstats.com/stats.htm</a>
I'm torn. I like the essay, but it's just as easy to excuse the film: using people to act the part of androids necessitates their being extra lifelike.<p>Similarly, the point was a metaphor for the blurring of the line between real and fake, so an easily distinguished fake would ruin that.
Obligatory plug for everyone to read The Shockwave Rider by John Brunner. It was published in 1975 and nails the technology of the 90s so accurately that it's quite freaky to read.
Part of the problem is that we actually don't know what tomorrow will be like, so it's hard to pick the right soothsayer, if you will. Maybe another movie got the smartphone thing right but you never watched it.<p>Rather than Blade Runner, I think Children of Men will turn out to be a much more prescient vision of what's to come. (Not the main plot line but everything surrounding it.) But like I said... we don't know.
Technology almost never advances directly. Predicting the future would be easy if things simply improved by 1% per year across the board.<p>Tech advances are always a series of lateral moves, where the last innovation allows for the next new thing, or, more commonly, a massive expansion of an existing concept.
<i>Why not unbundle the task and give the conscious thinking to the computer, and the mindless grabbing to the human?</i><p>I could think of a few scary ones.
What I think we often miss about technology is the underlying point here: no technological progress without social progress. We need a political movement to define a new social contract that ensures broad support for the implementation of disruptive technologies. This is the essential challenge for our generation.