A familiar example is “goto” versus structured programming. “goto” is supremely expressive—conditional jumps are sufficient to express all of the usual structured programming constructs. But sacrificing “goto” enables much more powerful reasoning about programs because it restricts the shape of the control flow graph to common patterns: “while”, “do…while”, “for”.<p>One of the core features of functional programming is “recursion schemes”, that is, generalising patterns of data flow. While “goto” lets you manipulate control flow arbitrarily, it turns out that you don’t usually want arbitrary control flow—and the same is true for data, so rather than “for” loops, it’s often easier to think in terms of the common patterns: “map”, “reduce”, “filter”.
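To make the parallel concrete, here's a minimal Haskell sketch (the function names are mine, purely for illustration): the same computation written once with explicit recursion, the data-flow analogue of "goto", and once as a pipeline of the constrained patterns named above.

```haskell
-- Summing the squares of the even numbers in a list.

-- Version 1: explicit recursion. Like "goto", this is maximally
-- flexible; nothing about the shape of the traversal is constrained.
sumSquaresOfEvensExplicit :: [Int] -> Int
sumSquaresOfEvensExplicit []     = 0
sumSquaresOfEvensExplicit (x:xs)
  | even x    = x * x + sumSquaresOfEvensExplicit xs
  | otherwise = sumSquaresOfEvensExplicit xs

-- Version 2: a pipeline of filter, map, and a fold ("reduce").
-- Each stage is restricted to one job, so each can be understood,
-- tested, and replaced in isolation.
sumSquaresOfEvens :: [Int] -> Int
sumSquaresOfEvens = foldr (+) 0 . map (\x -> x * x) . filter even
```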
While there is some truth to the author's point, and we can see the effect a lot in practice, he makes the mistake of confusing "useful" with "analyzable in FP terms".<p>This exact point (comprehensibility vs. power) is why Smalltalk is so amazing: it adds a lot of power while at the same time being <i>more</i> comprehensible, not less. That's no small feat, and IMHO one that is both under-appreciated and under-analyzed.<p>EDIT: Of course, see "The Power of Simplicity" <a href="http://www.selflanguage.org/_static/published/self-power.pdf" rel="nofollow">http://www.selflanguage.org/_static/published/self-power.pdf</a>
This is really natural in mathematics and logic—it's the tension between consistency and completeness.<p>Essentially, you have a tension between laws and models. A larger number of laws means more reasoning power at the expense of fewer models (possibly zero, in which case you're inconsistent). A larger number of models means greater realizability but fewer laws (possibly zero, in which case you've gained nothing through this exercise).<p>It's always a game of comparative linguistics—does this abstraction pay its way? Well, without it we have this set of laws, and with it we gain this new set. Are things in balance such that the abstraction is realizable and the laws are meaningful? That's your tradeoff.
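A small Haskell sketch of that trade, under the usual algebraic reading (the class and instance names here are mine, not from any library):

```haskell
-- Few laws, many models: anything with an associative operation and
-- an identity element is a model of this class.
class MyMonoid a where
  mempty'  :: a
  mappend' :: a -> a -> a
  -- Laws (stated in comments, not machine-checked):
  --   mappend' mempty' x == x
  --   mappend' x mempty' == x
  --   mappend' x (mappend' y z) == mappend' (mappend' x y) z

-- Lists are a model:
instance MyMonoid [b] where
  mempty'  = []
  mappend' = (++)

-- Add one more law, commutativity (mappend' x y == mappend' y x), and
-- you gain reasoning power: folds can now be reordered or parallelised
-- freely. But lists drop out as a model, because (++) is not
-- commutative. Integer addition still qualifies:
newtype Total = Total Int
instance MyMonoid Total where
  mempty' = Total 0
  mappend' (Total x) (Total y) = Total (x + y)
```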
I like what the author is suggesting. It's thought-provoking and is consistent with many experiences I've had in life.<p>An interesting analog in the arts is the notion that a chosen or imposed constraint can provoke creativity and open up new possibilities.
For programming languages, a paradigm that lets you be "as expressive as necessary" is LOP[1] (language-oriented programming).<p>With it you can basically choose your level of expressiveness while you develop your program (with different levels for different parts of your program). The frustrating trade-off between expressiveness and readability[2] is the main reason I love this paradigm. Unfortunately, LOP has never really caught on. I hope that will change soon.<p>[1] <a href="http://en.wikipedia.org/wiki/Language-oriented_programming" rel="nofollow">http://en.wikipedia.org/wiki/Language-oriented_programming</a><p>[2] For programming languages I would rather speak of readability (as opposed to analyzability) because a program is rarely set in stone; it stays "alive" (modified and enhanced over time). This probably does not apply to the other fields discussed in the article, though.
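To sketch what that per-part choice of level looks like in practice, here is a tiny embedded DSL in Haskell (all names are mine; real LOP tooling such as language workbenches goes much further):

```haskell
-- A deliberately weak language: arithmetic expressions only. Because
-- the data type admits no loops or effects, every analysis below is
-- total -- exactly the expressiveness/analyzability trade.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr

-- Evaluation always terminates:
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- So do static analyses that would be impossible for arbitrary code,
-- e.g. a simple cost estimate:
cost :: Expr -> Int
cost (Lit _)   = 1
cost (Add a b) = 1 + cost a + cost b
cost (Mul a b) = 1 + cost a + cost b

-- Meanwhile the host language around the DSL keeps full power; you
-- pick the level of expressiveness separately for each part.
```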
Using these definitions, I think we can define technological progress as the discovery of new concepts that are:<p>- More powerful than previous concepts but with the same useful properties.<p>- Or as powerful as the previous concepts but with more useful properties.
I really like this post; it's a nice look at an issue that comes up over and over again in designing systems.<p>One thing that occurs to me is that it relates to which part of a system I identify with. For example, I think very differently about a dictatorship if I imagine myself the benevolent dictator than if I think of myself as a citizen.<p>I also feel very differently about the different sorts of system depending on what I'm up to. When I'm mainly exploring a problem space, I want powerful tools, but when I'm building something to last, I want something safely constrained. In the former, I place my identity with the lone author, for whom the power to do the unexpected is vital. In the latter, I identify with those maintaining and debugging a system, for whom the power to do anything is a giant pain.
"That is, the more expressive a language or system is, the less we can reason about it, and vice versa. The more capable the system, the less comprehensible it is."<p>What makes these assertions true? Research/data/polls etc would be helpful. It is hard to accept such wide-ranging claims without some proof.<p>Also, could someone please post the effective definitions of "expressiveness" and "capability" as used in the post ?<p>Thanks.
Seems related to the Rule of Least Power: <a href="http://en.wikipedia.org/wiki/Rule_of_least_power" rel="nofollow">http://en.wikipedia.org/wiki/Rule_of_least_power</a> , <a href="http://c2.com/cgi/wiki?PrincipleOfLeastPower" rel="nofollow">http://c2.com/cgi/wiki?PrincipleOfLeastPower</a>
That link in the comments, <a href="http://www.logicmatters.net/resources/pdfs/Galois.pdf" rel="nofollow">http://www.logicmatters.net/resources/pdfs/Galois.pdf</a> (syntax/semantics as a Galois connection), seems pretty interesting.
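For anyone who doesn't want to open the PDF, the core construction is the standard one (my notation, which may differ from the note's): between sets of sentences and sets of structures, Mod and Th form an antitone Galois connection.

```latex
\[
  M \subseteq \mathrm{Mod}(T) \iff T \subseteq \mathrm{Th}(M)
\]
% Both sides say: every structure in M satisfies every sentence in T.
% Enlarging the theory T can only shrink Mod(T), which is the
% laws-vs-models trade-off discussed upthread.
```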