Actually, Common Lisp supports a gazillion different programming styles.<p>The version with small functions, similar to the Haskell version:<p><pre><code> (subseq
(remove-if
(complement #'numberp)
(butlast list 3))
0 5)
</code></pre>
In the above Common Lisp code, we use four different functions, each of which does one task (examples follow the list):<p><pre><code> * subseq sequence start &optional end => subsequence
* remove-if test sequence => result-sequence
* complement function => complement-function
* butlast list &optional n => result-list
</code></pre>
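For instance (simple REPL sketches of the four functions, using made-up sample lists, not data from the article):<p><pre><code> (butlast '(1 2 3 4 5 6) 3)                       ; => (1 2 3)
 (remove-if (complement #'numberp) '(1 a 2 b 3))  ; => (1 2 3)
 (subseq '(1 2 3 4 5 6 7) 0 5)                    ; => (1 2 3 4 5)
</code></pre>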
For different approaches, see Common Lisp libraries like Series or Iterate... Iterate is LOOP on steroids.<p><a href="http://series.sourceforge.net" rel="nofollow">http://series.sourceforge.net</a><p><a href="https://common-lisp.net/project/iterate/" rel="nofollow">https://common-lisp.net/project/iterate/</a><p>> This is known as composability, or the UNIX philosophy. In Lisp a procedure tends to accept many options which configure its behaviour.<p>Check out the UNIX man pages for tail, grep, ... to see how strange the above quote about the 'UNIX philosophy' is. In reality, Unix commands are programs with an obscene amount of configuration options, sometimes piping data around as text, sometimes glued together by strange shell languages...
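Going back to Iterate for a moment: the thread's running "drop 3, keep matches, take 5" example might look like this (a sketch, assuming the Iterate library is loaded and p names a predicate; this is not code from the article):<p><pre><code> (iter (for x in (nthcdr 3 xs))
       (when (p x)
         (collect x into result))
       (while (< (length result) 5))
       (finally (return result)))
</code></pre>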
Is this really a philosophical difference? Granted, I've only used Clojure as far as Lisps go, but composability seems to be something that's emphasized.<p>Rather than<p><pre><code> (remove-if-not #'p xs :count 5 :start 3)
</code></pre>
It seems to me like most Clojure users would do something like<p><pre><code> (->> xs (drop 3) (filter p) (take 5))
</code></pre>
which is much closer to<p><pre><code> take 5 . filter p . drop 3</code></pre>
Common Lisp has these kitchen-sink functions and macros because it was a standard developed by a committee whose goal was to incorporate several popular implementations of Lisp that each had several decades' worth of cruft. Lisp was a big enough business at the time that having several mutually incompatible versions of Lisp was making knowledge sharing and business difficult. And having a standard was important for many reasons.<p><i>update: spelling and...</i><p>The real philosophical differences between Haskell and Lisp are basically apples and oranges. Lisp's composition strategy is the recursive application of symbolic expressions... the famous EVAL and APPLY. I don't know enough about the theoretical underpinnings of Haskell to make any kind of apt comparison, but my intuition suggests that to do so would be moot. They're just too different.
I'm not sure I agree at all with the premise of this article. Languages are syntax; they don't have "philosophies." They may have design elements that reflect or support the philosophies of their designers (e.g. functions as first-class citizens), but the author doesn't provide a critique of what they believe these to be, instead looking at a few standard-library functions that some yahoo implemented.<p>Look at something like Java. Java has a core set of design elements. It was built by a person with some philosophical leanings ("everything is a class", "checked/unchecked exceptions are different things", "no first-class functions"). Then it has standard libraries (I/O, collections, threading/synchronization), each of which was built by a person with their own set of biases and understandings. The original Collections implementation has lots of mutable data structures; then later Josh Bloch decided he didn't like that anymore and adopted "immutability" as a core design philosophy. Immutability was not previously considered important when evaluating a Java implementation. What you end up with is a mishmash of different opinions that only gets more varied as you go.<p>Some third-party libraries like Guava didn't jibe with the philosophical leanings of the language itself and looked for workarounds. They went as far as to create their own implementation of functions as first-class citizens in a language that was expressly designed to omit them. Some commonly used Android libraries do this as well.<p>My point here is that "language" can mean lots of things. It can refer to the language itself, its runtime, the syntax+runtime+community, what is idiomatic, and so on, and on. People and groups of people have the philosophies. I'd like to see the author point to something a little more critical about the differences between these two concepts. This premise is a little muddled.
Some of the Lisp and Haskell code examples aren't doing the same thing. For example, these two do completely different things:<p><pre><code> (remove-if-not #'p xs :count 5 :start 3)
take 5 . filter p . drop 3
</code></pre>
I don't like Haskell, and it's not immediately obvious to me how to achieve what the Common Lisp version is doing in Haskell, so I won't bother, but one way of writing the Haskell in CL would be:<p><pre><code> (loop for x in (subseq xs 3)
when (p x) collect x into result
until (= (length result) 5)
finally (return result))
</code></pre>
Another option would be to use subseq and remove-if-not, etc. (sketched below), but without lazy evaluation, the loop version will be more efficient. And though some Lisp people dislike LOOP, I like that it's easy to read, if not always easy to write ;-)<p>About the topic of the article, though, to me this seems less like a philosophical difference than a result of Haskell not having easy-to-use default and optional parameters. There's currying, but it's not a great substitute, and it's a little awkward to use.<p>I like the Common Lisp way, even if it's crufty at times, because the keyword arguments to functions like remove-if-not and sort are easier for me to use than chaining a half dozen functions. I don't have to think about whether I need to call filter before or after take or drop, etc. At the end of the day, it's a personal preference, though.<p>Another advantage is that I don't have to "roll my own" for common idioms.
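For comparison, the chained subseq/remove-if-not version mentioned above might look roughly like this (a sketch; it assumes at least five elements survive the filter, otherwise subseq signals an error, and it builds two intermediate lists along the way):<p><pre><code> (subseq (remove-if-not #'p (subseq xs 3)) 0 5)
</code></pre>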
IMHO whoever wrote this article should change the title to "A philosophical difference between Haskell and Common Lisp". There are a lot of LISPs and not all of them follow Common Lisp's philosophy.
And we Schemers would express it like this:<p>(cut take 5 (filter <> (drop 3 <>)))<p>or, without the cut macro:<p>(λ (pred list)
(take 5 (filter pred (drop 3 list))))<p>And yes, in most Schemes, you can use λ as a synonym for lambda. Sometimes you have to define it first, though. Anyways, that looks a lot like the Haskell to you, doesn't it? It doesn't have the laziness, but other than that...<p>Oh! Oh! I almost forgot! You can also use the thrush combinator, if you use the clojurian package:<p>(λ (list pred)
(->> list (drop 3) (filter pred) (take 5)))<p>And yes, I think these are all the ways you can do this in Scheme. CHICKEN specifically. With various macro packages.<p>Oh, wait! I forgot we have function composition, too, but you'd have to use lambda or cut constantly to make the thing work because we don't have currying. So these are all the ELEGANT ways to do this in CHICKEN Scheme. With the cut SRFI. Or clojurian. And srfi-1, which is basically standard. And it's better than the examples given, because pred and list aren't pre-specified.
I'm not entirely sure that this is due to philosophical differences. The fact that Haskell is lazily evaluated makes writing functions that do only one thing much easier, since there is no performance hit for writing code like:<p><pre><code> take 5 . filter (not . p) . drop 3
</code></pre>
In a strictly evaluated language, this would involve iterating over the list three different times. (Kind of not really, since take 5 isn't going to be that expensive.)
The difference is not one of "philosophy", but rather of <i>using the right types</i>. Haskell's so-called "list" type constructor is actually a type constructor of <i>streams</i>. Unsurprisingly, streams are much better than lists if you want to do stream processing.
Perhaps it's just me, but I don't see that Haskell and Lisp are that similar, other than...<p>1. They're both programming languages.<p>2. They both allow you to pass functions as arguments to other functions.<p>Am I missing something here? Why are the two linked? Is it because Lisp is seen as the birthplace of functional languages (because of point 2)?
I'm confused by the last example. There are no elements greater than 5 in the list (1 2 3 4).<p>Also, I'm not sure why they used takeWhile instead of filter in the last Haskell part.
The biggest difference between Haskell and Lisp is that Lisp is multi-paradigm, while Haskell is not. Haskell is more opinionated, and makes a bunch of decisions for you (that you can choose to work around/sugar/hack until Haskell looks like something else/does what you want).<p>All the other things that Haskell comes with - strong typing, monads, lazy evaluation - can be written into Common Lisp, but whether you need them is often questionable.
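As one illustration of that point, a bare-bones form of lazy evaluation can be bolted onto Common Lisp with two small definitions (a sketch; delay and force are names borrowed from Scheme, not part of the CL standard):<p><pre><code> ;; DELAY wraps an expression in a memoizing thunk; FORCE runs it at most once.
 (defmacro delay (expr)
   (let ((done (gensym)) (value (gensym)))
     `(let ((,done nil) (,value nil))
        (lambda ()
          (unless ,done
            (setf ,value ,expr
                  ,done t))
          ,value))))

 (defun force (promise)
   (funcall promise))

 ;; (force (delay (expensive-computation))) evaluates the body only when
 ;; forced, and only once; EXPENSIVE-COMPUTATION is a placeholder.
</code></pre>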
> <i>In Lisp a procedure tends to accept many options which configure its behaviour. This is known as monolithism, or to make procedures like a kitchen-sink, or a Swiss-army knife.</i><p>This is the case in some areas of the Common Lisp language; it is not true of Lisp as a family of dialects.<p>There are plenty of examples of Lisp functions or operators that just do one thing: `cons`, `car`, the `lambda` macro operator.<p><pre><code> ;; CL
(remove-if-not #'p xs :count 5 :start 3)
;; Haskell
take 5 . filter p . drop 3
;; TXR Lisp: a dialect with ties to CL:
(take 5 [keep-if p (drop 3 list)])
;; Compose the functions using opip macro
;; (result is a function object):
(opip (drop 3) (keep-if p) (take 5))
</code></pre>
TXR Lisp's library functions don't have the :count, :start and whatnot. In fact, there are no keyword parameters, only optionals. If you want to default an optional on the left and specify a value for one on the right, you can pass the colon keyword to explicitly request the default:<p><pre><code> (defun opt (a : (b 1) (c 2)) ;; two optional args
)
(opt 4 : 5) ;; b takes 1, c takes 5.
</code></pre>
The colon is just the symbol whose name is the empty string "", in the keyword package. It makes for nice sugar and has a couple of uses in the language. Note how in the defun it separates required args from optionals.<p>(Anyone else cringe at "UNIX philosophy"? How silly! This is the Unix philosophy: let's reduce everything to a string in between processing stages and parse it all over again, with the simplifying assumption that it always has exactly the format we are looking for, without actually validating it.)
Real general question about function composition - when you're comfortable with it, how do you think it to yourself, or say it to yourself, like what's your shorthand mental model?<p>I was able to really intuitively get comfortable with unix piping a long time ago, so instead of:<p>take 5 . filter p . drop 3<p>I'd be thinking, ok, take a thing, drop the first three, grep (or whatever), now take the first five. It felt intuitive because each step solved a problem, and then I'd move forward into the future, and have to only think of one additional solution (head -5) assuming I did the previous steps right.<p>Meanwhile, take 5 . filter p . drop 3 is in the opposite order, from right to left.<p>Maybe I'm just saying that left association is easier to think about than right association. Don't you feel that weird recursive bump in your head, the increasing mental stack, when you are dealing with function composition and right association?
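For what it's worth, the left-to-right reading can be had on the Lisp side with a small threading macro (a Common Lisp sketch in the spirit of Clojure's thread-last macro; ->> here is a local definition, though third-party libraries provide similar operators):<p><pre><code> (defmacro ->> (value &rest forms)
   "Thread VALUE through FORMS as the last argument of each call."
   (reduce (lambda (acc form)
             (if (listp form)
                 (append form (list acc))
                 (list form acc)))
           forms
           :initial-value value))

 ;; Reads top to bottom, like a shell pipeline:
 ;; (->> xs (nthcdr 3) (remove-if-not #'p))
</code></pre>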
Lisp actually does have stream fusion, but it has it in the form of a library![0]<p>[0] <a href="https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node347.html" rel="nofollow">https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node347.html</a>
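For a rough flavor of what the fused pipelines look like (a sketch based on the Series operators described at that link; it assumes the Series package is loaded):<p><pre><code> ;; The Series macros compile this scan/filter/collect pipeline
 ;; into a single traversal rather than separate passes:
 (collect (choose-if #'numberp (scan '(1 a 2 b 3))))
 ;; => (1 2 3)
</code></pre>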
Haskell has strong typing and lazy evaluation, which makes it easy for functions to <i>take only one argument</i> at a time. Although a function could take a tuple parameter, it's usually rewritten to take each component of the tuple as a separate parameter, which keeps the strong typing and built-in currying simple, makes higher structures like monads possible, and gives the language a syntax to suit this style. Lisp functions and macros, OTOH, <i>must be variadic</i> to enable the homoiconicity of the language. It's therefore much more difficult for parameters to be typed, or to curry them.<p>These two styles make Haskell and Lisp mutually incompatible, unless they use clunky add-ons like Template Haskell macros or Typed Clojure annotations. The pure form of each language, however, is based on two mutually exclusive foundations, i.e. strongly typed auto-curried parameters vs variadic untyped macro parameters. The poster children of each language, i.e. monads and macros, thus also don't mix well with each other.
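That said, partial application can be emulated in Common Lisp with a small helper, at the cost of an explicit funcall at the call site (a sketch; the Alexandria library ships a curry utility along these lines):<p><pre><code> (defun curry (fn &rest args)
   "Return a closure calling FN with ARGS followed by any later arguments."
   (lambda (&rest more-args)
     (apply fn (append args more-args))))

 ;; (funcall (curry #'remove-if-not #'evenp) '(1 2 3 4 5)) => (2 4)
</code></pre>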
Having worked on one of the largest Common Lisp projects, I have to say this is spot on. And monolithism is not just visible at the level of functions; it's visible at a higher level too. Common Lisp nudges you to write monolithic applications, and it's crucial to keep many details in one head (in the case of a big project, many heads).<p>Also, I think Clojure is a Scheme, so the overall approach is different.
Minor nitpick: the loop at the end could build the range with 'for i from 1 upto 4' instead of a list literal, to be more similar to the Haskell code.
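Concretely, that clause builds the same range (a sketch; the article's loop body isn't quoted here, so a plain collect stands in for it):<p><pre><code> (loop for i from 1 upto 4
       collect i)
 ;; => (1 2 3 4), the same elements as the literal list
</code></pre>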
You have to be clear about which Lisp. Clojure and Scheme are Lisps, and their focus is very much on simplicity and the use of combinators.<p>The complex, do-it-all-in-one huge macro is a Common Lispism, not a Lispism.<p>Secondly, Clojure, like Haskell, is focused on sequence abstractions, not lists. Sequences can include collections, streams, observables, sockets and many other kinds of processes that can be modelled as an event stream.
The main difference is: it is trivial to build an efficient Haskell implementation on top of Common Lisp, keeping interoperability with the rest of the system. And it is impossible to do it the other way around, to build a Lisp on top of Haskell.
Or you could use Shen and spend less time philosophizing<p><a href="http://www.shenlanguage.org/" rel="nofollow">http://www.shenlanguage.org/</a>