The author of the blog post is pretending to be "one of us" (people who get Lisp), as in "Lisp devotees (myself once included)", but apparently they never understood it if they still think that Lisp's advantages "make Lisp into an unwieldy, conceptual sledgehammer that's often out of scale with the problem being solved." The word "often" there also makes me think they're making a point without experience or proper evidence.

They're a Lisper who cut their teeth on Clojure. I think that when people like me disdain Clojure's lispness, it's because we think it doesn't really teach you the philosophy behind it. This is not an argument, just my perception.

They're also in favor of code censorship (let's remember that censoring is detrimental to creative processes):

"Giving any programmer on your team the ability to arbitrarily extend the compiler can lead to a bevy of strange syntax and hard-to-debug idiosyncrasies."

I use Racket in production along with my team, and may I suggest a humble, easy solution: one person makes a pull request, another reviews it, and if new macros are introduced we discuss them with the team to see whether they're necessary. It's that simple. The blog post author is making a big deal out of nothing. Preferring a language that withholds that power because the author has a problem trusting others, instead of choosing to communicate with their team members, is appalling.

The author also keeps mentioning Python's "simplicity". How can anything be as simple as (function args)? I have yet to understand what people who argue this point mean by "simplicity".

Then the author talks about static checks: "How can a static analysis tool keep up with a language that's being arbitrarily extended at runtime?" Simple: do macro expansion before static type checking, as Typed Racket does (a small sketch follows below).

They're also still playing with SQL DSLs. I think that's a waste of effort. SQL is already a DSL for talking to the database, and I don't want another layer, because I'm not going to be manipulating SQL in my code: "SQL" has nothing to do with the problem domain I'm working on. At that point any SQL queries have already been abstracted away inside functions with meaningful names like associate-product-to-customer or whatever (also sketched below); I never want to talk about SQL in my problem-domain abstraction layer. Using SQL DSLs as an argument against macros from the static-type-checking angle is a poor argument anyway, because SQL DSLs are usually aimed at people who write mutable code. I use Typed Racket's db library, and its query functions work together with the type system to let me know when I'm not handling some kind of value that might come back from the database.

The author then mentions Unix's consistency. Unix couldn't even decide on a standard notation for command-line arguments. Then comes the fallacy that reality is object-oriented. Objects can't possibly be as composable as functions, because objects break down into methods (which are not composable, not first-class, etc.), whereas functions (lambdas) can make up everything and can really be thought of as the atom of computation (i.e., the lambda calculus).

Complaints like "Clojure has nine different ways to define a symbol" are moot. Pick the one your team likes and go with it. On to the next thing. Also, to argue against Lisps by arguing against Clojure is like arguing against democracy by arguing against the Democratic Republic of the Congo.
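To make the macro-expansion point concrete, here's a minimal Typed Racket sketch (my own toy macro and names, nothing from the blog post): the macro expands first, and the type checker then runs over the expansion, so misuse is caught at compile time.

    #lang typed/racket

    ;; The macro expands first; Typed Racket then type-checks the expansion.
    (define-syntax-rule (unless-zero n body)
      (if (zero? n) 0 body))

    (: safe-div (-> Integer Integer Integer))
    (define (safe-div a b)
      (unless-zero b (quotient a b)))

    ;; (unless-zero "two" 5) would be rejected at compile time,
    ;; because the expansion applies zero? to a String.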
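And for the SQL point, this is roughly what I mean by keeping the queries out of the problem-domain layer; a hedged sketch using Racket's stock db library, with a made-up connection, table, and columns:

    #lang racket
    (require db)

    ;; The SQL lives here; the rest of the code only ever sees the name.
    ;; conn, customer_products, and the column names are hypothetical,
    ;; and the $1/$2 placeholder syntax assumes a PostgreSQL connection.
    (define (associate-product-to-customer conn customer-id product-id)
      (query-exec conn
                  "INSERT INTO customer_products (customer_id, product_id) VALUES ($1, $2)"
                  customer-id
                  product-id))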
To say things like "Python wants the conceptual machinery for accomplishing a certain thing within the language to be obvious and singular" while ignoring the fact that lisps machinery is obviously much simpler and obvious and singular - again, (function args) - is disingenuous. It does make me believe that all their SICP reading was for nothing (I've only lightly skimmed SICP and I don't pretend to have read it).<p>The author does acknowledge (en passant) that certain Schemes (they don't identify which) don't suffer from this complexity (which makes the whole post look more like a criticism of Clojure). I'd invite the author to look into Racket.<p>They say that lisps "impose significant costs in terms of programmer comprehension". My experience is that if you divide your layers of abstraction correctly you will be able to work in the problem domain layer where nothing is obscure. And that layer is built from smaller in the layer below, parts that are also clear in what they accomplish because they only do one thing in their abstraction level. I've found that following this rule of only doing one thing per function makes for code that is easy to understand all the way from the bottom to the top layers. Programming this way, however, is the classic, boring way to write code [1], and because it's not a fad I guess people aren't too much into it.<p>Also to the point above, having already rewritten a significant portion of a Rails legacy app into Racket with the help of my coworkers, it seems that lisps introduce more understanding and shed more light onto the code, precisely because it makes everything explicit (we code in functional style so we pass every argument a function needs) and does away with "Magic" that Rails and rails fans like so much. When something gets annoying to write we implement our own "magic" on top of it, not in terms of silly runtime transformations that lesser languages like Ruby need to resort to, but through dynamic variables (Racket calls them parameters), monadic contexts, etc, i.e. things that can be checked at compile time.<p>And finally: "I think it’d be irresponsible to choose Lisp for a large-scale project given the risks it introduces". Well, the only risk I've personally witnessed is the very real risk of your coworkers starting do dislike more mainstream, faddish languages like Python and Ruby, because they don't allow the same freedom and simplicity and explicitness that lisp does (lisp has a long tradition of making things first-class, which consequently makes these things explicit).<p>[1] We use top-down design to decide what the interface for a given abstraction layer will look like, and bottom-up to decide which functions should be written in the layer below; then we cycle that process by refining the layer below through the same process of defining its interface top-down and then the layer below it as bottom-up. And we use algebraic type checking along with contracts to enforce post-conditions and properly assign the blame to the right portion of the code to speed up debugging. These are all old techniques.